JPBenowitz
I don't understand how one reconciles a "real" wave function with its instantaneous collapse. To me this is an obvious violation of causality.
JPBenowitz said:I don't understand how one reconciles a "real" wave function with its instantaneous collapse. To me this is an obvious violation of causality.
stevendaryl said:I don't know why you would say that in the lab, there is no other way than the frequentist interpretation. Bayesian probability works perfectly fine in the lab. The differences between Bayesian and frequentist really only come into play at the margins, when you're trying to figure out whether your statistics are good enough to make a conclusion. The frequentists use some cutoff for significance, which is ultimately arbitrary. The Bayesian approach is smoother--the more information you have, the stronger a conclusion you can make, but you can use whatever data you have.
Since frequentist probability only applies in the limit of infinitely many trials, there isn't a hard and fast distinction between a single event and 1000 events. Neither one implies anything about the probability, strictly speaking.
atyy said:A simple example is to simply take the quantum formalism as it is and add that the wave function in a particular frame (the aether) is real. The wave function in other frames will not be real, but predictions made using the quantum formalism in any frame will be the same as predictions made in the aether frame, so one cannot tell which frame is the aether frame. This can be seen in http://arxiv.org/abs/1007.3977.
This is not consistent with classical relativistic causality, but it doesn't matter since relativity does not require classical relativistic causality, only that the probabilities of events are frame-invariant and that classical information should not be transmitted faster than light. The main problem with this way of making the wave function real is not relativity, but that it leaves the measurement problem open.
vanhees71 said:What I don't like most about this Qbism is the idea that the quantum state is subjective. That's not how it is understood by the practitioners using QT as a description of nature: a state is defined by preparation procedures. A preparation can be more or less accurate, and usually you don't prepare pure states but mixed ones, but nevertheless there's nothing subjective in the meaning of states.
vanhees71 said:This is a contradiction in itself: If you assume a relativistic QFT to describe nature, by construction all measurable (physical) predictions are Poincare covariant, i.e., there's no way to distinguish one inertial frame from another by doing experiments within quantum theory. As Gaasbeek writes already in the abstract: The delayed-choice experiments can be described by standard quantum optics. Quantum optics is just an effective theory describing the behavior of the quantized electromagnetic field in interaction with macroscopic optical apparati in accordance with QED, the paradigmatic example of a relativistic QFT, and as such is Poincare covariant in its predictions about the observable outcomes, and quantum optics is indeed among the most precisely understood fields of relativistic quantum theory: All predictions are confirmed by high-accuracy experiments. So quantum theory cannot reintroduce an "aether", or whatever you like to call a "preferred reference frame", into physics! By construction QED, and thus also quantum optics, fulfills relativistic causality constraints too!
vanhees71 said:What I don't like most about this Qbism is the idea that the quantum state is subjective.
vanhees71 said:My point is that no matter how you metaphysically interpret the meaning of probabilities, in the lab you have to "get statistics" by preparing ensembles of the system under consideration. The Qbists always mumble something about there being some meaning of probabilities for a single event, but in my opinion that doesn't help at all to make sense of the probabilistic content of the quantum mechanical state.
atyy said:No, there is no contradiction, it just seems very superfluous to modern sensibilities, where we are used to having done away with the aether, since we cannot figure out which frame is the aether frame. But Lorentz Aether Theory and its "invisible aether" make the same predictions as the standard "no aether" formulation of special relativity, and in fact one can derive the standard "no aether" formulation of special relativity from Lorentz Aether Theory, so there cannot be a contradiction, unless special relativity itself is inconsistent.
Ok, then what's in your view the difference between Bayesian and frequentist interpretations of probabilities, particularly the statement that probabilities make sense for a single event?
stevendaryl said:I don't think Qbism adds much (if anything) to the understanding of quantum mechanics, but I was simply discussing Bayesianism (not necessarily quantum). Bayesian probability isn't contrary to getting statistics--there is such a thing as "Bayesian statistics", after all. As I said, the difference is only in how you interpret the resulting statistics.
Hm, I don't think that collapse is needed in probabilistic theories. What's the point of it? I throw the die, ignoring the details of the initial conditions, and get some (pseudo-)random result which I read off. Why then should there be another physical process called "collapse"? The probability for some outcome is simply the description of my expectation of how often a certain outcome of a random experiment will occur when I perform it under the given conditions. The standard assumption ##P(j)=1/6## is due to the maximum-entropy principle: If I don't know anything about the die, I just take the probability distribution of maximum entropy (i.e., the least prejudice) in the sense of the Shannon entropy. This hypothesis I can test with statistical means in an objective way by throwing the die very often. Then you get some new probability distribution according to the maximum-entropy principle due to the gained statistical knowledge, which may be more realistic, because it turns out that it's not a fair die. Has then anything in the physical world "collapsed", because I change my probabilities (expectations about the frequency of outcomes of a random experiment) according to more (statistical) information about the die? I think not, because I don't know what that physical process called "collapse" should be. Also my die remains unchanged etc.
atyy said:I don't think QBism makes sense, but many aspects of it seem very standard and nice to me. For example, how can we understand wave function collapse? An analogy in classical probability is that it is like throwing a die, where before the throw the outcome is uncertain, but after the throw the probability collapses to a definite result. Classically, this is very coherently described by the subjective Bayesian interpretation of probability, from which the frequentist algorithms can be derived. It is fine to argue that the state preparation in QM is objective. However, the quantum formalism links measurement and preparation via collapse. If collapse is subjective by the die analogy, then because collapse is a preparation procedure, the preparation procedure is also at least partly subjective.
vanhees71 said:Hm, I don't think that collapse is needed in probabilistic theories. What's the point of it? I throw the die, ignoring the details of the initial conditions, and get some (pseudo-)random result which I read off. Why then should there be another physical process called "collapse"? The probability for some outcome is simply the description of my expectation of how often a certain outcome of a random experiment will occur when I perform it under the given conditions. The standard assumption ##P(j)=1/6## is due to the maximum-entropy principle: If I don't know anything about the die, I just take the probability distribution of maximum entropy (i.e., the least prejudice) in the sense of the Shannon entropy. This hypothesis I can test with statistical means in an objective way by throwing the die very often. Then you get some new probability distribution according to the maximum-entropy principle due to the gained statistical knowledge, which may be more realistic, because it turns out that it's not a fair die. Has then anything in the physical world "collapsed", because I change my probabilities (expectations about the frequency of outcomes of a random experiment) according to more (statistical) information about the die? I think not, because I don't know what that physical process called "collapse" should be. Also my die remains unchanged etc.
Also for me there is no difference between the quantum mechanical probabilities and the above example of probabilities applied in a situation where the underlying dynamics is assumed to be deterministic in the sense of Newtonian mechanics. The only difference is that in the quantum case the probabilistic nature of our knowledge is not just due to the ignorance of the observer (in the die example, ignorance of the precise initial conditions of the die as a rigid body, whose knowledge would enable us in principle to predict with certainty the outcome of the individual toss, because it's a deterministic process); rather, it is in principle not possible to have determined values for all observables of the quantum object. In quantum theory only those observables have a determined value (or a value with very high probability) which have been prepared, but then necessarily other observables that are not compatible with those which have been prepared to be (pretty) determined are (pretty) undetermined. Then I do a measurement of such an undetermined observable on an individual system prepared in this way and get some accurate value. Why should there be any collapse, only because I found a value? For sure there's an interaction of the object with the measurement apparatus, but that's not a "collapse of the state" but just an interaction. So also in the quantum case there's no necessity at all to have a strange happening called "collapse of the quantum state".
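For concreteness, here is a minimal Python sketch of the maximum-entropy reasoning in the die example above. The bias of the simulated die is a purely illustrative assumption, not anything taken from the posts: the uniform distribution ##P(j)=1/6## maximizes the Shannon entropy when nothing is known, and the empirical frequencies from many throws give the "new" distribution reflecting the gained statistical knowledge.

```python
import numpy as np

# Shannon entropy H(p) = -sum_j p_j log p_j
def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                        # convention: 0*log(0) = 0
    return -np.sum(p * np.log(p))

# With no information about the die, the maximum-entropy distribution
# over six outcomes is the uniform one, P(j) = 1/6.
uniform = np.full(6, 1 / 6)
print("H(uniform) =", shannon_entropy(uniform))      # log(6) ~ 1.79, the maximum

# Simulate many throws of a loaded die (bias chosen purely for illustration)
# and form the empirical frequencies -- the "new" distribution based on the
# gained statistical knowledge.
rng = np.random.default_rng(0)
assumed_bias = np.array([0.10, 0.10, 0.10, 0.10, 0.10, 0.50])
throws = rng.choice(6, size=10_000, p=assumed_bias)
empirical = np.bincount(throws, minlength=6) / len(throws)

print("empirical frequencies:", np.round(empirical, 3))
print("H(empirical) =", shannon_entropy(empirical))  # lower than log(6): less uncertainty
```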
vanhees71 said:I guess, it's what's called epistemic. But again, why do you consider the "collapse" as essential? Where do you need it?
atyy said:Hmmm, are we still disagreeing on this?
bhobba said:It's a logical consequence of the assumption of continuity - but has anyone ever measured a state again an infinitesimal moment later to check if the assumption holds?
bhobba said:It's not really needed - one simply assumes the filtering type measurement it applies to is just another state preparation. Of course a system's state changes if you prepare it differently - but instantaneously - well, that's another matter.
atyy said:the preparation procedure involves choosing the sub-ensemble based on the outcome of the immediately preceding measurement. So preparation and measurement are linked.
bhobba said:Exactly how long does that prior measurement take to prepare the system differently? And what's the consequence for instantaneous collapse? Think of the double slit. The, say, electron interacts with the screen and decoheres pretty quickly - but not instantaneously. We do not know how the resultant improper state becomes a proper one - but I doubt that, however that's done, it's instantaneous - although of course one never knows.
atyy said:non-unitary evolution of wave function collapse (suitably generalized).
bhobba said:Sure - but in modern times I think the problem of outcomes is a better way of stating the issue than collapse, which has connotations I don't think the formalism implies.
vanhees71 said:Ok, then what's in your view the difference between Bayesian and frequentist interpretations of probabilities, particularly the statement that probabilities make sense for a single event?
E.g., the weather forecast says there's a 99% probability of snow tomorrow, and tomorrow it doesn't snow. Does that tell you anything about the validity of the probability given by the forecast? I don't think so.
It's just a probability based on experience (i.e., the collection of many weather data over a long period) and weather models based on very fancy hydrodynamics on big computers. The probabilistic statement can only be checked by evaluating a lot of data based on weather observations.
Of course, there's Bayes's theorem on conditional probabilities, which has nothing to do with interpretations or statistics but is a theorem that can be proven within the standard axiom system of Kolmogorov:
$$P(A|B) P(B) = P(B|A) P(A),$$
which is of course not the matter of any debate.
I'm really unable to understand why there is such hype about Qbism.
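For concreteness, a tiny numerical check of the theorem stated above, using a made-up joint distribution for two binary events: both sides of the identity reduce to the joint probability ##P(A \cap B)##.

```python
# Made-up joint probabilities for two binary events A and B (illustrative values).
p_joint = {(True, True): 0.12, (True, False): 0.28,
           (False, True): 0.18, (False, False): 0.42}

p_A = sum(v for (a, b), v in p_joint.items() if a)   # P(A) = 0.40
p_B = sum(v for (a, b), v in p_joint.items() if b)   # P(B) = 0.30
p_A_given_B = p_joint[(True, True)] / p_B            # P(A|B)
p_B_given_A = p_joint[(True, True)] / p_A            # P(B|A)

# Both sides of Bayes's theorem equal the joint probability P(A and B).
print(p_A_given_B * p_B, p_B_given_A * p_A)          # both 0.12
assert abs(p_A_given_B * p_B - p_B_given_A * p_A) < 1e-12
```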
vanhees71 said:This is a contradiction in itself: If you assume a relativistic QFT to describe nature, by construction all measurable (physical) predictions are Poincare covariant, i.e., there's no way to distinguish one inertial frame from another by doing experiments within quantum theory. As Gaasbeek writes already in the abstract: The delayed-choice experiments can be described by standard quantum optics. Quantum optics is just an effective theory describing the behavior of the quantized electromagnetic field in interaction with macroscopic optical apparati in accordance with QED, the paradigmatic example of a relativistic QFT, and as such is Poincare covariant in its predictions about the observable outcomes, and quantum optics is indeed among the most precisely understood fields of relativistic quantum theory: All predictions are confirmed by high-accuracy experiments. So quantum theory cannot reintroduce an "aether", or whatever you like to call a "preferred reference frame", into physics! By construction QED, and thus also quantum optics, fulfills relativistic causality constraints too!
JPBenowitz said:If the wave function is a physical object, then is Hilbert Space a physical space?
JPBenowitz said:If the wave function is a physical object, then is Hilbert Space a physical space? In other words if the wave function is a physical object then would this necessitate that quantum spacetime is an infinite dimensional complex vector space?
Maybe this discussion about frequentism versus Bayesianism can shed some light on the parallel discussion about collapse. Following the logic of the quoted post, we could make both the absence and the presence of collapse compatible, as two ways of introducing irreversibility (i.e., entropy through probability and preparation, or through measurement-collapse) into the quantum theory, two ways of contemplating how probabilities are updated by measurements. Probably collapse is a rougher way of viewing it, but it is a matter of taste. It all amounts to the same QM.
stevendaryl said:You can go one better: Bayesian statistics allows us to have a probability for something with zero events. Of course, in that case, it's just a guess (although you can have a principled way of making such guesses). A single event provides a correction to your guess. More events provide better correction.
It doesn't tell you a lot, but it tells you something. If the forecast is for 99% chance of snow, and it doesn't snow, then (for a Bayesian), the confidence that the forecast is accurate will decline slightly. If for 100 days in a row, the weather service predicts 99% chance of snow, and it doesn't snow any of those days, then for the Bayesian, the confidence that the reports are accurate will decline smoothly each time. It would never decline to zero, because there's always a nonzero chance that an accurate probabilistic prediction is wrong 100 times in a row, just like there is a nonzero chance that a fair coin will yield heads 100 times in a row.
The frequentist would (presumably) have some cutoff value for significance. The first few times that the weather report proves wrong, they would say that no conclusion can be drawn, since the sample size was so small. Then at some point, they would conclude that they had a large enough sample to make a decision, and would decide that the reports are wrong.
Note that both the Bayesian and the frequentist make use of arbitrary parameters--the Bayesian has an arbitrary a priori notion of the probability of events. The frequentist has an arbitrary cutoff for determining significance. The difference is that the Bayesian smoothly takes into account new data, while the frequentist withholds any judgement until some threshold amount of data, then makes a discontinuous decision.
Bayes' formula is of course valid whether you are a Bayesian or a frequentist, but the difference is that the Bayesian associates probabilities with events that have never happened before, and so can make sense of any amount of data. So for the example we're discussing, there would be an a priori probability of snow, and an a priori probability of the weather forecaster being correct. With each day that the forecaster makes a prediction, and each day that it does or does not snow, those two probabilities are adjusted based on the data, according to Bayes' formula.
So Bayes' formula, together with a priori values for probabilities, allows the bayesian to make probabilistic statements based on whatever data is available.
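A minimal sketch of this kind of updating, under a deliberately simple two-hypothesis model: either the forecaster's stated probabilities are accurate, or the forecasts are uninformative (the 50% baseline for the "uninformative" hypothesis and the 50/50 prior are illustrative assumptions). Each snowless day after a "99% snow" forecast lowers the posterior probability that the forecaster is accurate smoothly, and it never reaches exactly zero.

```python
# Two hypotheses about the forecaster:
#   H_good: the stated 99% snow probability is accurate -> P(no snow) = 0.01
#   H_bad : the forecasts are uninformative (assumed)   -> P(no snow) = 0.50
likelihood_no_snow = {"H_good": 0.01, "H_bad": 0.50}

posterior = {"H_good": 0.5, "H_bad": 0.5}   # arbitrary a priori probabilities

for day in range(1, 101):                   # 100 snowless days in a row
    # Bayes' rule: posterior is proportional to likelihood * prior; then normalize.
    unnorm = {h: likelihood_no_snow[h] * posterior[h] for h in posterior}
    total = sum(unnorm.values())
    posterior = {h: v / total for h, v in unnorm.items()}
    if day in (1, 10, 100):
        print(f"day {day:3d}: P(forecaster accurate) = {posterior['H_good']:.3e}")

# The posterior declines smoothly with each miss but never reaches zero,
# just as 100 heads in a row from a fair coin has a tiny but nonzero probability.
```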
Well, I'm not defending Qbism. I was just talking about Bayesian versus frequentist views of probability. As I said previously, I don't think that Qbism gives any new insight into the meaning of quantum mechanics, whether or not you believe in Bayesian probability.
We disagree in the one point that you say collapse is a necessary part of the quantum-theoretical formalism. I think it's superfluous and contradicts very fundamental physical principles, as pointed out by EPR. As far as I remember, Weinberg is undecided about the interpretation at the end of his very nice chapter on the issue. I think that the minimal statistical interpretation is everything we need to apply quantum theory to observable phenomena. Another question is whether you consider QT as a "complete theory". This was the main question, particularly Heisenberg was concerned about, and this gave rise to the Copenhagen doctrine, but as we see in our debates here on the forum, there's not even a clear definition, what the Copenhagen interpretation might be. That's why I prefer to label the interpretation I follow as the "minimal statistical interpretation". I think it's very close to the flavor of Copenhagen due to Bohr, although I'm not sure about what Bohr thinks with regard to the collapse. I don't agree with his hypothesis that there must be a "cut" between quantum and classical dynamics, because it cannot be defined. Classical behavior occurs due to decoherence and the necessity of coarse graining in defining relevant "macroscopic" observables but not from a cut at which quantum theory becomes invalid and classical dynamics takes over.
atyy said:Hmmm, are we still disagreeing on this? Collapse is in Landau and Lifshitz, Cohen-Tannoudji, Diu and Laloe, Sakurai and Weinberg (and every other major text except Ballentine, who I'm sure is wrong), so it really is quantum mechanics. To see that it is essential, take an EPR experiment in which Alice and Bob measure simultaneously. What is simultaneous in one frame will be sequential in another frame. As long as one has sequential measurements in which sub-ensembles are selected based on the measurement outcome, one needs collapse or an equivalent postulate.
vanhees71 said:We disagree in the one point that you say collapse is a necessary part of the quantum-theoretical formalism. I think it's superfluous and contradicts very fundamental physical principles, as pointed out by EPR. As far as I remember, Weinberg is undecided about the interpretation at the end of his very nice chapter on the issue. I think that the minimal statistical interpretation is everything we need to apply quantum theory to observable phenomena. Another question is whether you consider QT as a "complete theory". This was the main question, particularly Heisenberg was concerned about, and this gave rise to the Copenhagen doctrine, but as we see in our debates here on the forum, there's not even a clear definition, what the Copenhagen interpretation might be. That's why I prefer to label the interpretation I follow as the "minimal statistical interpretation". I think it's very close to the flavor of Copenhagen due to Bohr, although I'm not sure about what Bohr thinks with regard to the collapse. I don't agree with his hypothesis that there must be a "cut" between quantum and classical dynamics, because it cannot be defined. Classical behavior occurs due to decoherence and the necessity of coarse graining in defining relevant "macroscopic" observables but not from a cut at which quantum theory becomes invalid and classical dynamics takes over.
vanhees71 said:The "collapse" to my understanding is just the trivial thing that after I take notice of the result of a random experiment, then for this instance the previously undetermined or unknown feature is decided. There's nothing happening in a physical sense. Nowadays most experiments take data, store them in a big computer file and then evaluate these outcomes much later. Would you say there's a collapse acting on things that are long gone, only because somebody makes some manipulation of data on a storage medium? Or has the collapse occurred when the readout electronics have provided the signal to be written on that medium? Again, I don't think that the collapse is necessary to use quantum theory as a probabilistic statement about the outcome of measurements with a given preparation (state) of the system.
vanhees71 said:We disagree in the one point that you say collapse is a necessary part of the quantum-theoretical formalism. I think it's superfluous and contradicts very fundamental physical principles, as pointed out by EPR.
vanhees71 said:Nowadays most experiments take data, store them in a big computer file and then evaluate these outcomes much later. Would you say there's a collapse acting on things that are long gone, only because somebody makes some manipulation of data on a storage medium? Or has the collapse occurred when the readout electronics have provided the signal to be written on that medium? Again, I don't think that the collapse is necessary to use quantum theory as a probabilistic statement about the outcome of measurements with a given preparation (state) of the system.
Still there is no argument given why you need the collapse. I don't understand why one needs it within the minimal statistical interpretation. In no experiment I'm aware of do I need a collapse to use quantum theory to understand its outcome!
atyy said:Weinberg is undecided about interpretation, and it is true that one can do without collapse provided one does not use Copenhagen or a correct version of the minimal statistical interpretation. For example, one can use the Bohmian interpretation, or try to use a Many-Worlds interpretation, both of which have no collapse. But it is not possible to use Copenhagen or a correct version of the minimal statistical interpretation without collapse (or an equivalent assumption such as the equivalence of proper and improper mixtures). This is why most major texts (except Ballentine's erroneous chapter 9) include collapse, because the default interpretation is Copenhagen or the minimal statistical interpretation.
Peres argues that one can remove the cut and use coarse graining, but Peres is wrong, because the coarse-grained theory in which the classical/quantum cut appears to be emergent yields predictions, but the fine-grained theory does not make any predictions. So the coarse graining that Peres mentions introduces the classical/quantum cut in disguise. It is important to note that the cut does not say that we cannot enlarge the quantum domain and treat the classical apparatus in a quantum way. What the cut says is that if we do that, we need yet another classical apparatus in order for quantum theory to yield predictions.
Another way to see that the minimal statistical interpretation must have a classical/quantum cut and collapse (or equivalent postulates) is that a minimal interpretation without these elements would solve the measurement problem, contrary to the consensus that a minimal interpretation does not solve it.
Collapse occurs immediately after the measurement. In a Bell test, the measurements are time stamped, so if you accept the time stamp, you accept that that is when the measurement happens, and not later after post-processing. It is ok not to accept the time stamp, because measurement is a subjective process. However, in such a case, there is no violation of the Bell inequalities at spacelike separation. If one accepts that quantum mechanics predicts a violation of the Bell inequalities at spacelike separation, then one does use the collapse postulate. It is important to note that at this stage we are not committing to collapse as a physical process, leaving it open that it could be epistemic.
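For reference, a minimal numerical sketch of the textbook prediction being invoked here: for the singlet state the correlation at analyzer angles a and b is ##E(a,b)=-\cos(a-b)##, and the standard CHSH settings give ##|S|=2\sqrt{2}>2##. This just evaluates the standard formula under idealized measurements; it is not tied to any particular interpretation of how the correlations arise.

```python
import numpy as np

# Singlet-state correlation for spin measurements along analyzer angles a and b.
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH settings (in radians).
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("S   =", S)                # -2*sqrt(2), about -2.83
print("|S| =", abs(S), "> 2")    # exceeds the bound 2 obeyed by local hidden variables
```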
vanhees71 said:That's a good point: The state preparation in, e.g., a Stern-Gerlach experiment is through a von Neumann filter measurement. You let the particle run through an inhomogeneous magnetic field, and this sorts the particles into regions of different ##\sigma_z## components (where ##z## is the direction of the homogeneous piece of the magnetic field). Then we block out all particles not within the region of the desired value of ##\sigma_z##.
Microscopically the shielding works simply as an absorber of the unwanted particles. One can see that there is no spontaneous collapse but simply local interactions of the particles with the shielding, which absorbs them and lets the "wanted ones" through, because they are in a region where there is no shielding. The absorption process is of course highly decoherent; it's described by local interactions and quantum dynamics. No extra "cut" or "collapse" needed.
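Whatever name one prefers for the selection of the transmitted sub-ensemble, in the formalism it is represented by a projection followed by renormalization (the Lüders rule). A minimal sketch for an idealized spin-1/2 Stern-Gerlach filter, assuming an incoming ##|{\uparrow_x}\rangle## beam and that the blocked beam is simply discarded:

```python
import numpy as np

# Spin-1/2 basis states |up_z> and |down_z>.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Incoming beam prepared as |up_x> = (|up_z> + |down_z>)/sqrt(2).
psi_in = (up + down) / np.sqrt(2)

# The inhomogeneous field sorts the beam by sigma_z; blocking the unwanted
# beam keeps only the sigma_z = +1 component.  Formally: project, then renormalize.
P_up = np.outer(up, up)                                          # projector |up_z><up_z|
unnormalized = P_up @ psi_in
p_transmitted = float(np.vdot(unnormalized, unnormalized).real)  # fraction passing = 1/2
psi_out = unnormalized / np.sqrt(p_transmitted)

print("transmitted fraction:", p_transmitted)        # 0.5
print("filtered (prepared) state:", psi_out)          # the |up_z> sub-ensemble
```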
vanhees71 said:Where do you need a collapse here either? A and B use polarization foils and photon detectors to figure out whether their respective photon ran through the polarization foil or not. The foil practically ideally lets through only photons with a determined linear-polarization state; the other photons are absorbed, which happens through local interactions of the respective photon with the foil, and there is no long-distance interaction between A's foil and B's photon or vice versa. So there cannot be any collapse as in the Copenhagen interpretation (Heisenberg flavor, I think?). So there cannot be a collapse at the level of the polarizers. The same argument holds for the photo detectors. Also note that the time stamps are accurate but always of finite resolution, i.e., the registration of a photon is a fast but not instantaneous process. On a macroscopic scale of resolution, it's of course a "sharp time stamp". The photo detector is applicable for these experiments if the accuracy of the time stamps is sufficient to unambiguously ensure that you can relate the entangled photon pairs. For a long enough distance between the photon source and A's and B's detectors and low enough photon rates, that's no problem. Again, nowhere do I need a collapse.
vanhees71 said:Bohr was of course right in saying that finally we deal with macroscopic preparation/measurement instruments, but in my opinion he was wrong that one needs a cut between quantum and classical dynamics anywhere, because the classical behavior of macroscopic objects is (at least FAPP :) ) an emergent phenomenon and clearly understandable via coarse graining.
I also must admit that I consider Asher Peres's book as one of the best when it comes to the foundational questions of quantum theory. His definition of quantum states as preparation procedures alone eliminates a lot of the esoterics often invoked to solve the "measurement problem". FAPP there is no measurement problem, as the successful description of even the "weirdest" quantum behavior of nature shows!
atyy said:Let's start with particles in a Bell state. Do the particles remain entangled after A has made a measurement?
bhobba said:No - it's now entangled with the measurement apparatus. But I don't think that's what is meant by collapse.
atyy said:At some point you will invoke that an improper mixture becomes a proper mixture. When you do that, you are using collapse.
bhobba said:In the ensemble interpretation that is subsumed in the assumption an observation selects an outcome from a conceptual ensemble. Collapse is bypassed.
atyy said:If you have a conceptual ensemble, that is conceptual hidden variables.
No, they are disentangled due to the (local!) interaction of A's photon with the polarizer and photon detector. Usually it gets absorbed by the latter, and there's only B's photon left as long as it is not absorbed by his detector either.
atyy said:Let's start with particles in a Bell state. Do the particles remain entangled after A has made a measurement?
If you define this as a cut, it's fine with me, but this doesn't say that there is a distinguished classical dynamics in addition to quantum dynamics.
If you use FAPP, then you do use a cut. The whole point of the cut and collapse is FAPP. Removing the cut and collapse is not FAPP, and would solve the measurement problem.
bhobba said:Its exactly the same ensemble used in probability. I think you would get a strange look from a probability professor if you claimed such a pictorial aid was a hidden variable.
bhobba said:Atyy, I think we need to be precise about what is meant by collapse. Can you describe in your own words what you think collapse is?
My view is it's the idea that observation instantaneously changes a quantum state, in opposition to unitary evolution. Certainly it changes in filtering type observations - but instantaneously - to me that's the rub. It changed because you have prepared the system differently, but not by some mystical non-local instantaneous 'collapse' - if you have states, you have different preparations - it's that easy.
bhobba said:Added Later:
As the Wikipedia article says:
On the other hand, the collapse is considered a redundant or optional approximation in:
the Consistent histories approach, self-dubbed "Copenhagen done right"
the Bohm interpretation
the Many-worlds interpretation
the Ensemble Interpretation
IMHO it's redundant in the above.
vanhees71 said:No, they are disentangled due to the (local!) interaction of A's photon with the polarizer and photon detector. Usually it gets absorbed by the latter, and there's only B's photon left as long as it is not absorbed by his detector either.
The shortest summary of the EPR paper that I've read and makes sense to me can be summarized in 2 sentences:
vanhees71 said:My interpretation of the EPR paper is that they have not criticized quantum theory as such but only the Copenhagen flavor of it, with the collapse.
bhobba said:Its exactly the same ensemble used in probability. I think you would get a strange look from a probability professor if you claimed such a pictorial aid was a hidden variable.
bohm2 said:The shortest summary of the EPR paper that I've read and makes sense to me can be summarized in 2 sentences:
1. Either QM is incomplete or if it's complete, it must be nonlocal.
2. Nonlocality is unreasonable, therefore it is incomplete.
vanhees71 said:FAPP yes. In reality it's of course way more complicated. You have a system consisting of the BBO crystal, a laser, the entangled two-photon Fock state (wave packets!) as well as polarization foils and photon detectors at Alice's and Bob's place. I guess that should roughly be the relevant setup.
The time evolution of this whole setup is described by the unitary time evolution of quantum theory.