Is the collapse indispensable?

In summary, the thread debates whether the collapse (projection) postulate is an indispensable part of quantum mechanics. Some participants argue that the generalized Born rule for sequential measurements can be derived without collapse, for instance by modelling the apparatus as a quantum system with commuting pointer observables, while others hold that something beyond the bare Born rule, such as a projection postulate or an equivalent assumption about improper versus proper mixtures, is still required. The disagreement ultimately turns on which interpretation one adopts and on what a measurement is taken to do.
  • #141
atyy said:
Regardless, one cannot do away with collapse as a postulate and replace it with nothing. For example, if collapse is taken away as a postulate, one possible replacement is the postulate that an improper mixture is proper.

I don't want to argue about this. My take is as I said very early on. We have interpretations where no collapse occurs, e.g. MW, and those where it for sure happens, e.g. GRW. Both explain how an improper mixture becomes a proper one, so collapse is not a necessary part. As I said I don't want to argue the point - so that's all I will say.

Thanks
Bill
 
  • #142
atyy said:
Challenge: Derive the generalized Born rule from the Born rule, but without using collapse!
Lemma (conditional probability) For commuting projections ##P_A## and ##P_B##, the conditional probability ##P(B|A)## in the state ##\rho## is given by
##P(B|A) = \mathrm{Tr}\left(\frac{P_A \rho P_A}{\mathrm{Tr}(P_A \rho P_A)} P_B\right) = \frac{\mathrm{Tr}\left(P_B P_A \rho P_A P_B\right)}{\mathrm{Tr}(P_A \rho P_A)}##.
Proof. The conditional probability is defined by ##P(B|A) = \frac{P(A\wedge B)}{P(A)}##. The projection operator for ##A\wedge B## is given by ##P_{A\wedge B}=P_A P_B##. The Born rule tells us that ##P(A)=\mathrm{Tr}(\rho P_A)## and
##P(A\wedge B) = \mathrm{Tr}(\rho P_{A\wedge B}) = \mathrm{Tr}(\rho P_A P_B) = \mathrm{Tr}(\rho P_A^2 P_B) = \mathrm{Tr} (\rho P_A P_B P_A) = \mathrm{Tr}(P_A \rho P_A P_B) = \mathrm{Tr}(P_B P_A \rho P_A P_B)##. Now use linearity of the trace to get the formula for the conditional probability.

Now we want to apply this formula to sequential measurements. Let's say we want the probability to find observable ##Y## in the set ##O_{Y}## at time ##t_2## after having found observable ##X## in the set ##O_{X}## at time ##t_1 < t_2##. This corresponds to measuring the Heisenberg observables ##X(t_1)## and ##Y(t_2)## in the same sets. Let ##\pi_X## and ##\pi_Y## be the projection valued measures of ##X## and ##Y##. The corresponding projection valued measures of ##X(t_1)## and ##Y(t_2)## are given by ##U(t_1)^\dagger \pi_X U(t_1)## and ##U(t_2)^\dagger \pi_Y U(t_2)##. Thus we are interested in the projections ##P_A = U(t_1)^\dagger \pi_X(O_X) U(t_1)## and ##P_B = U(t_2)^\dagger \pi_Y(O_Y) U(t_2)##. We assume that these operators commute, which is true up to arbitrarily small corrections for instance for filtering type measurements or after a sufficient amount of decoherence has occurred. Thus we can apply the lemma and get:
##P(Y(t_2)\in O_Y | X(t_1) \in O_X) = \frac{\mathrm{Tr}(P_B P_A \rho P_A P_B)}{\mathrm{Tr}(P_A \rho P_A)} = \frac{\mathrm{Tr}(\pi_Y(O_Y) U(t_2-t_1) \pi_X(O_X) U(t_1) \rho U(t_1)^\dagger \pi_X(O_X) U(t_2-t_1)^\dagger \pi_Y(O_Y))}{\mathrm{Tr}(\pi_X(O_X) U(t_1) \rho U(t_1)^\dagger \pi_X(O_X))}##
This is the right formula and we only assumed the Born rule and decoherence/filtering measurements.
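As a quick sanity check, the lemma can also be verified numerically; here is a minimal sketch in which the two commuting projections and the random density matrix are just made-up examples:
[code=python]
import numpy as np

# Two commuting projections on a 4-dimensional Hilbert space (both diagonal here)
P_A = np.diag([1, 1, 0, 0]).astype(complex)
P_B = np.diag([1, 0, 1, 0]).astype(complex)
assert np.allclose(P_A @ P_B, P_B @ P_A)

# A random density matrix rho (positive, trace 1)
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho)

# Conditional probability from the lemma ...
lemma = np.trace(P_B @ P_A @ rho @ P_A @ P_B) / np.trace(P_A @ rho @ P_A)

# ... versus the bare definition P(B|A) = P(A and B)/P(A), using P_{A and B} = P_A P_B
# and the ordinary Born rule; no state update is used anywhere.
definition = np.trace(rho @ P_A @ P_B) / np.trace(rho @ P_A)

assert np.allclose(lemma, definition)
print(lemma.real, definition.real)
[/code]
The agreement uses nothing beyond the Born rule and ##P_{A\wedge B}=P_A P_B## for commuting projections.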
 
  • Like
Likes naima
  • #143
rubi said:
Lemma (conditional probability) For commuting projections ##P_A## and ##P_B##, the conditional probability ##P(B|A)## in the state ##\rho## is given by
##P(B|A) = \mathrm{Tr}\left(\frac{P_A \rho P_A}{\mathrm{Tr}(P_A \rho P_A)} P_B\right) = \frac{\mathrm{Tr}\left(P_B P_A \rho P_A P_B\right)}{\mathrm{Tr}(P_A \rho P_A)}##.
Proof. The conditional probability is defined by ##P(B|A) = \frac{P(A\wedge B)}{P(A)}##. The projection operator for ##A\wedge B## is given by ##P_{A\wedge B}=P_A P_B##. The Born rule tells us that ##P(A)=\mathrm{Tr}(\rho P_A)## and
##P(A\wedge B) = \mathrm{Tr}(\rho P_{A\wedge B}) = \mathrm{Tr}(\rho P_A P_B) = \mathrm{Tr}(\rho P_A^2 P_B) = \mathrm{Tr} (\rho P_A P_B P_A) = \mathrm{Tr}(P_A \rho P_A P_B) = \mathrm{Tr}(P_B P_A \rho P_A P_B)##. Now use linearity of the trace to get the formula for the conditional probability.

Now we want to apply this formula to sequential measurements. Let's say we want the probability to find observable ##Y## in the set ##O_{Y}## at time ##t_2## after having found observable ##X## in the set ##O_{X}## at time ##t_1 < t_2##. This corresponds to measuring the Heisenberg observables ##X(t_1)## and ##Y(t_2)## in the same sets. Let ##\pi_X## and ##\pi_Y## be the projection valued measures of ##X## and ##Y##. The corresponding projection valued measures of ##X(t_1)## and ##Y(t_2)## are given by ##U(t_1)^\dagger \pi_X U(t_1)## and ##U(t_2)^\dagger \pi_Y U(t_2)##. Thus we are interested in the projections ##P_A = U(t_1)^\dagger \pi_X(O_X) U(t_1)## and ##P_B = U(t_2)^\dagger \pi_Y(O_Y) U(t_2)##. We assume that these operators commute, which is true up to arbitrarily small corrections for instance for filtering type measurements or after a sufficient amount of decoherence has occurred. Thus we can apply the lemma and get:
##P(Y(t_2)\in O_Y | X(t_1) \in O_X) = \frac{\mathrm{Tr}(P_B P_A \rho P_A P_B)}{\mathrm{Tr}(P_A \rho P_A)} = \frac{\mathrm{Tr}(\pi_Y(O_Y) U(t_2-t_1) \pi_X(O_X) U(t_1) \rho U(t_1)^\dagger \pi_X(O_X) U(t_2-t_1)^\dagger \pi_Y(O_Y))}{\mathrm{Tr}(\pi_X(O_X) U(t_1) \rho U(t_1)^\dagger \pi_X(O_X))}##
This is the right formula and we only assumed the Born rule and decoherence/filtering measurements.

How about non-commuting operators?
 
  • #144
atyy said:
How about non-commuting operators?
The operators ##X## and ##Y## don't need to commute, as long as ##X(t_1)## and ##Y(t_2)## do. This is exactly what decoherence causes to high precision (environmental superselection). If it weren't the case, the event ##A\wedge B## would be ill-defined, since there is no orthogonal projection that corresponds to it.

Edit: To be more clear: ##X## and ##Y## are not directly the operators that measure the particle/..., but rather the operators that measure the position of the pointers of the measurement devices. If they didn't commute, it would mean that we couldn't read off the positions of the pointers simultaneously, i.e. the measurement devices would be in some macroscopic superposition.

Edit2: If we restrict X and Y to pointer observables, we don't even need to invoke decoherence here. A good measurement device is approximately classical, i.e. its pointer observable must commute with the pointer observables of other classical measurement devices up to at most tiny corrections. Otherwise, they would themselves be quantum objects and exhibit quantum behavior. For example, the position and momentum of a particle can never be known up to arbitrary precision, but the locations of the pointers of the position and momentum measurement devices can still be known exactly (since the measurement devices are assumed to be classical objects). Hence, they are supposed to commute. Non-commuting observables don't qualify as observables corresponding to pointers of classical measurement devices.
 
Last edited:
  • Like
Likes atyy
  • #145
rubi said:
The operators ##X## and ##Y## don't need to commute, as long as ##X(t_1)## and ##Y(t_2)## do. This is exactly what decoherence causes to high precision (environmental superselection). If it weren't the case, the event ##A\wedge B## would be ill-defined, since there is no orthogonal projection that corresponds to it.

Edit: To be more clear: ##X## and ##Y## are not directly the operators that measure the particle/..., but rather the operators that measure the position of the pointers of the measurement devices. If they didn't commute, it would mean that we couldn't read off the positions of the pointers simultaneously, i.e. the measurement devices would be in some macroscopic superposition.

Edit2: If we restrict X and Y to pointer observables, we don't even need to invoke decoherence here. A good measurement device is approximately classical, i.e. its pointer observable must commute with the pointer observables of other classical measurement devices up to at most tiny corrections. Otherwise, they would themselves be quantum objects and exhibit quantum behavior. For example, the position and momentum of a particle can never be known up to arbitrary precision, but the locations of the pointers of the position and momentum measurement devices can still be known exactly (since the measurement devices are assumed to be classical objects). Hence, they are supposed to commute. Non-commuting observables don't qualify as observables corresponding to pointers of classical measurement devices.

That should probably work, because it is a variation of two traditional ways of avoiding collapse that do work

(1) Use the deferred measurement principle
(2) Restrict to commuting observables.

The two things are related because the deferred measurement principle changes sequential measurement of non-commuting observables to simultaneous measurement of commuting observables.
 
  • #146
Here's a sense in which collapse is unnecessary. Suppose we formalize the notion of the "macroscopic state" of the universe at a particular moment. It might, for instance, be a coarse-grained description of the mass-energy-momentum density, the spin density, the charge/current density, the values of various fields, etc. But only to a certain level of accuracy (the level of accuracy can be chosen keeping the uncertainty principle in mind). Then QM can be used to compute probabilities for macroscopic histories (or more accurately, the conditional probability that an initial history up to a specific moment will evolve into a certain more complete history). My conjecture is that in theory, it is unnecessary to invoke wave function collapse to compute these probabilities. The collapse, in this view, would come in as a short-cut, or approximation, to computing these probabilities, ignoring interference terms between macroscopically distinguishable intermediate states.
 
  • #147
atyy said:
That should probably work, because it is a variation of two traditional ways of avoiding collapse that do work

(1) Use the deferred measurement principle
(2) Restrict to commuting observables.

The two things are related because the deferred measurement principle changes sequential measurement of non-commuting observables to simultaneous measurement of commuting observables.
It shows that every quantum theory that requires collapse can be converted into one that evolves purely unitarily and makes the same predictions. Here is the recipe:
We start with a Hilbert space ##\mathcal H##, a unitary time evolution ##U(t)## and a set of (possibly non-commuting) observables ##(X_i)_{i=1}^n##. We define the Hilbert space ##\hat{\mathcal H} = \mathcal H\otimes\underbrace{\mathcal H \otimes \cdots \mathcal H}_{n \,\text{times}}##. We define the time evolution ##\hat U(t) \psi\otimes\phi_1\otimes\cdots\otimes\phi_n = (U(t)\psi)\otimes\phi_1\otimes\cdots\otimes\phi_n## and the pointer observables ##\hat X_i \psi\otimes\phi_1\otimes\cdots\otimes\phi_n = \psi\otimes\phi_1\otimes\cdots\otimes (X_i\phi_i)\otimes\cdots\otimes\phi_n##. First, we note that ##\left[\hat X_i,\hat X_j\right]=0##, so we can apply the previous result. Now, for every observable ##X_i## with ##X_i\xi_{i k} = \lambda_{i k}\xi_{i k}## (I assume discrete spectrum here, so I don't have to dive into direct integrals), we introduce the unitary von Neumann measurements ##U_i \left(\sum_k c_k\xi_{i k}\right)\otimes\phi_1\otimes\cdots\otimes\phi_n = \sum_k c_k \xi_{i k} \otimes\phi_1\otimes\cdots\otimes \xi_{i k} \otimes\cdots\otimes\phi_n##. Whenever a measurement of an observable ##X_i## is performed, we apply the corresponding unitary operator ##U_i## to the state. Thus, all time evolutions are given by unitary operators (either ##\hat U(t)## or ##U_i##) and thus the whole system evolves unitarily. Moreover, all predictions of QM with collapse, including joint and conditional probabilities, are reproduced exactly, without ever having to use the collapse postulate.

Of course, this is the least realistic model of measurement devices possible, but one can always put more effort in better models.
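As an illustration only, here is a minimal numerical rendering of this recipe for a single qubit with the two non-commuting observables ##\sigma_z## and ##\sigma_x## (a sketch; the CNOT-style coupling unitaries and the ##|0\rangle## pointer-ready states below are my own choices for the black-box ##U_i##, not something fixed by the recipe):
[code=python]
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Hd = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
I2 = np.eye(2, dtype=complex)
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)

def kron(*ops):
    """Tensor product of several matrices."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Enlarged Hilbert space: system qubit plus one pointer qubit per observable.
# The pointer observables act on different tensor factors, so they commute,
# even though sigma_z and sigma_x themselves do not.
X1_hat = kron(I2, sz, I2)   # pointer 1 shows the sigma_z result
X2_hat = kron(I2, I2, sz)   # pointer 2 shows the sigma_x result (in its own z basis)
assert np.allclose(X1_hat @ X2_hat, X2_hat @ X1_hat)

# Unitary "von Neumann measurements": copy the system's z-basis (resp. x-basis)
# information into pointer 1 (resp. pointer 2) without disturbing the system in
# that basis.  U1 is a plain CNOT, U2 a CNOT controlled in the x basis.
U1 = kron(P0, I2, I2) + kron(P1, sx, I2)
U2 = kron(Hd, I2, I2) @ (kron(P0, I2, I2) + kron(P1, I2, sx)) @ kron(Hd, I2, I2)
for U in (U1, U2):
    assert np.allclose(U.conj().T @ U, np.eye(8))   # the evolution stays unitary

# Example: a system prepared in |+> passes the sigma_x "measurement" untouched
# and leaves pointer 2 in its +1 record, as the recipe requires.
ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
state = np.kron(np.kron(plus, ket0), ket0)
assert np.allclose(U2 @ state, state)
[/code]
Joint and conditional probabilities for the pointer readings then follow from a single application of the Born rule to the commuting pointer observables, as in post #142.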
 
  • Like
Likes atyy and Mentz114
  • #148
rubi said:
It shows that every quantum theory that requires collapse can be converted into one that evolves purely unitarily and makes the same predictions. Here is the recipe:
We start with a Hilbert space ##\mathcal H##, a unitary time evolution ##U(t)## and a set of (possibly non-commuting) observables ##(X_i)_{i=1}^n##. We define the Hilbert space ##\hat{\mathcal H} = \mathcal H\otimes\underbrace{\mathcal H \otimes \cdots \mathcal H}_{n \,\text{times}}##. We define the time evolution ##\hat U(t) \psi\otimes\phi_1\otimes\cdots\otimes\phi_n = (U(t)\psi)\otimes\phi_1\otimes\cdots\otimes\phi_n## and the pointer observables ##\hat X_i \psi\otimes\phi_1\otimes\cdots\otimes\phi_n = \psi\otimes\phi_1\otimes\cdots\otimes (X_i\phi_i)\otimes\cdots\otimes\phi_n##. First, we note that ##\left[\hat X_i,\hat X_j\right]=0##, so we can apply the previous result. Now, for every observable ##X_i## with ##X_i\xi_{i k} = \lambda_{i k}\xi_{i k}## (I assume discrete spectrum here, so I don't have to dive into direct integrals), we introduce the unitary von Neumann measurements ##U_i \left(\sum_k c_k\xi_{i k}\right)\otimes\phi_1\otimes\cdots\otimes\phi_n = \sum_k c_k \xi_{i k} \otimes\phi_1\otimes\cdots\otimes \xi_{i k} \otimes\cdots\otimes\phi_n##. Whenever a measurement of an observable ##X_i## is performed, we apply the corresponding unitary operator ##U_i## to the state. Thus, all time evolutions are given by unitary operators (either ##\hat U(t)## or ##U_i##) and thus the whole system evolves unitarily. Moreover, all predictions of QM with collapse, including joint and conditional probabilities, are reproduced exactly, without ever having to use the collapse postulate.

Of course, this is the least realistic model of measurement devices possible, but one can always put more effort in better models.

Yes. The argument you have given is a generalization of using the deferred measurement principle to avoid collapse, which we have already agreed works. I have no problem with deriving the generalized Born rule without collapse for commuting observables. Ballentine does that, and I believe his argument is fine. The part of his argument I did not buy was his attempt to extend the argument to non-commuting observables. The important part of your argument is to circumvent the need for sequential measurement of non-commuting operators, which is fine, but it should be stated as a non-standard assumption.

Also, it does not support the idea that there is a wave function of the universe that evolves unitarily without collapse, because one still needs something in addition to the wave function, e.g. the classical apparatus.
 
Last edited:
  • #149
atyy said:
To get the definite outcome, one must further assume that the improper mixture is converted to a proper mixture, which is the same as assuming collapse. Ballentine and Peres are probably missing this assumption in their erroneous books.
A. Neumaier said:
I had already asked you to specify in detail your understanding of these terms and your reasoning for the conclusion, so that it can be critically discussed. Simply repeating this statement in a black box fashion as the sole justification for accusing a respectable author of making a fundamental error is not helpful at all.
atyy said:
I am not an outsider. I am stating that the standard texts are right.
Ballentine and Peres are the outsiders.
You are an outsider - where is your record of publications in the foundations of quantum mechanics? Or at least you are using a pseudonym so that you appear to be an outsider. This in itself would not be problematic. But you are making erroneous accusations based on a lack of sufficient understanding. This is very problematic.
atyy said:
Let's consider a system of one spin.

A pure state means that we have prepared many copies of the system of one spin, and each copy of the system is in the same pure state. For example, every copy of the single spin is pointing up.

A proper mixed state means that we have prepared many copies of the system of one spin, and each copy of the system is in a different pure state. For example, some copies of the single spin are pointing up and some are pointing down.

To illustrate an improper mixture, we have to consider a system of two spins. The system is in a pure state, which means we have many copies of the system of two spins, and each copy of the system of two spins is in the same pure state. If the pure state is one in which the two spins are entangled, then the state of a subsystem of one spin is an improper mixed state. We call the state of the subsystem a mixed state, because the subsystem in every copy behaves as if it is a proper mixed state. However, it is an improper mixed state, because if we consider the system of two spins, it is pure. So while we cannot distinguish between the proper and improper mixed state by looking at a subsystem of one spin, we can if we look at the entire system of two spins.
These explanations are valid for the Copenhagen interpretation but are meaningless in the context of the minimal (statistical) interpretation. In the minimal (statistical) interpretation discussed by Ballentine and Peres, a single system has no associated state at all. Thus your statements ''each copy of the system is in the same [resp. a different] pure state'' do not apply in their interpretation. You are seeing errors in their book only because you project your own Copenhagen-like interpretation (where a single system has a state) into a different interpretation that explicitly denies this. If a single system has no state, there is nothing that could collapse, hence there is no collapse. Upon projecting away one of the spins, an ensemble of 2-spin systems in an entangled pure state is automatically an ensemble in a mixed state of the subsystem, without anything mysterious having to happen in between. Looking at conditional expectations is all that is needed to verify this. No collapse is needed.
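For concreteness, this can be checked in a few lines of linear algebra (a sketch only; the singlet state and the 50/50 mixture are just the obvious example): the reduced state obtained from the entangled pair coincides, as an operator, with the proper 50/50 mixture, so no measurement on the single spin alone can distinguish the two situations.
[code=python]
import numpy as np

up = np.array([1.0, 0.0], dtype=complex)
down = np.array([0.0, 1.0], dtype=complex)

# Entangled pure state of the 2-spin system (singlet)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
rho_pair = np.outer(singlet, singlet.conj())

# Reduced state of spin 1: partial trace over spin 2 (the "improper mixture")
rho_improper = np.trace(rho_pair.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# A "proper" 50/50 mixture of up and down, prepared spin by spin
rho_proper = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(down, down.conj())

# The two operators coincide, so all single-spin statistics coincide as well
assert np.allclose(rho_improper, rho_proper)
print(rho_improper.real)
[/code]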

Thus not Ballentine and Peres but your understanding of their exposition is faulty. You should apologize for having discredited highly respectable experts on the foundations of quantum mechanics on insufficient grounds.

 
  • #150
rubi said:
It shows that every quantum theory that requires collapse can be converted into one that evolves purely unitarily and makes the same predictions. Here is the recipe:
We start with a Hilbert space ##\mathcal H##, a unitary time evolution ##U(t)## and a set of (possibly non-commuting) observables ##(X_i)_{i=1}^n##. We define the Hilbert space ##\hat{\mathcal H} = \mathcal H\otimes\underbrace{\mathcal H \otimes \cdots \mathcal H}_{n \,\text{times}}##. We define the time evolution ##\hat U(t) \psi\otimes\phi_1\otimes\cdots\otimes\phi_n = (U(t)\psi)\otimes\phi_1\otimes\cdots\otimes\phi_n## and the pointer observables ##\hat X_i \psi\otimes\phi_1\otimes\cdots\otimes\phi_n = \psi\otimes\phi_1\otimes\cdots\otimes (X_i\phi_i)\otimes\cdots\otimes\phi_n##. First, we note that ##\left[\hat X_i,\hat X_j\right]=0##, so we can apply the previous result. Now, for every observable ##X_i## with ##X_i\xi_{i k} = \lambda_{i k}\xi_{i k}## (I assume discrete spectrum here, so I don't have to dive into direct integrals), we introduce the unitary von Neumann measurements ##U_i \left(\sum_k c_k\xi_{i k}\right)\otimes\phi_1\otimes\cdots\otimes\phi_n = \sum_k c_k \xi_{i k} \otimes\phi_1\otimes\cdots\otimes \xi_{i k} \otimes\cdots\otimes\phi_n##. Whenever a measurement of an observable ##X_i## is performed, we apply the corresponding unitary operator ##U_i## to the state. Thus, all time evolutions are given by unitary operators (either ##\hat U(t)## or ##U_i##) and thus the whole system evolves unitarily. Moreover, all predictions of QM with collapse, including joint and conditional probabilities, are reproduced exactly, without ever having to use the collapse postulate.

Of course, this is the least realistic model of measurement devices possible, but one can always put more effort in better models.

I replied to this two posts up (#148). But now I am unsure whether you are right about needing only unitary evolution.

First let me say what I do agree with
1) The generalized Born rule can be derived for commuting observables without assuming collapse
2) By using measuring ancillas, one can replace non-commuting observables with commuting observables, at least for the purposes of deriving their joint probability distributions.

But can one really do without collapse? The reason I am doubtful is that at each measurement in the Schroedinger picture, the rule that is used is the Born rule, not the generalized Born rule. So by using successive applications of the Born rule, one does not get the joint probability, which the generalized Born rule gives. So rather I would say that although the generalized Born rule can be derived without collapse as a postulate, the generalized Born rule implies collapse.
 
  • #151
A. Neumaier said:
You are an outsider - where is your record of publications in the foundations of quantum mechanics? Or at least you are using a pseudonym so that you appear to be an outsider. This in itself would not be problematic. But you are making erroneous accusations based on a lack of sufficient understanding. This is very problematic.

These explanations are valid for the Copenhagen interpretation but are meaningless in the context of the minimal (statistical) interpretation. In the minimal (statistical) interpretation discussed by Ballentine and Peres, a single system has no associated state at all. Thus your statements ''each copy of the system is in the same [resp. a different] pure state'' do not apply in their interpretation. You are seeing errors in their book only because you project your own Copenhagen-like interpretation (where a single system has a state) into a different interpretation that explicitly denies this. If a single system has no state, there is nothing that could collapse, hence there is no collapse. Upon projecting away one of the spins, an ensemble of 2-spin systems in an entangled pure state is automatically an ensemble in a mixed state of the subsystem, without anything mysterious having to happen in between. Looking at conditional expectations is all that is needed to verify this. No collapse is needed.

Thus not Ballentine and Peres but your understanding of their exposition is faulty. You should apologize for having discredited highly respectable experts on the foundations of quantum mechanics on insufficient grounds.

Well, this point of view is also a bit dangerous, because what's meant by an ensemble is that you can prepare each single member of the ensemble in a well-defined way, which finally defines the idea of "state".

In the formalism, a state is just a self-adjoint, positive trace-class operator with trace 1, but that's an empty phrase from the physics point of view, because physics is about real things in the lab, and thus it must be possible to define a state in an operational way for a single object. In this sense the question of the collapse is of some importance, i.e., how can you make sure that you prepare a real-world system in a state which is described by the abstract statistical operator?

I take a pragmatic view on this: A state is defined by a real-world experimental setup. E.g., at a particle accelerator you prepare particles in a state with a quite well-defined momentum. Accelerator physicists construct their devices without much use of quantum theory as far as I know, but they use the classical description of the motion of charged particles in the classical electromagnetic fields designed to achieve a particle beam of high quality (i.e., high luminosity with a pretty well defined momentum).

Also the preparations usually discussed in textbooks, in the sense of idealized von Neumann filter measurements, can be understood in a very pragmatic way. Take the Stern-Gerlach experiment as an example. This can be fully treated quantum mechanically (although it's usually not done in the usual textbooks; for a very simple introduction, you can have a look at my QM 2 manuscript (in German) [1]). Then you have a rather well-defined spin-position entangled state with practically separated partial beams of definite spin (determined spin-z component). Then you simply block all unwanted partial beams by putting up some absorber material in the way at the corresponding position. What's left is then by construction a beam of well-defined ##\sigma_z## eigenstates.

Last but not least, you have to check such claims empirically, i.e., you have to make a sufficient set of measurements to make sure that you have really prepared the state with the accuracy you want.

Now comes the dilemma: We all tend to think in the naive collapse way when considering such filter measurements, assuming that the naive pragmatic way of filtering away the unwanted beams really does prepare the remaining beam in the way we think. This means that we assume that with this preparation procedure each single system (as a part of the ensemble used to check the probabilistic prediction of QT) is in this very state (it can of course just as well be a mixture). On the other hand, if we believe that relativistic quantum field theory provides the correct description, there's no action at a distance as implicitly assumed in the collapse hypothesis but only local interactions of the particles with all the elements of the preparation apparatus, including the "beam dumps" filtering away the unwanted beams. So you can take the collapse as a short-hand description of the preparation procedure, but not literally as something happening to the real-world entities (or ensembles of so prepared entities) without getting into a fundamental contradiction with the very foundations of local relativistic QT.

It's also interesting to see what experts in this field think. Yesterday we had Anton Zeilinger in our Physics Colloquium, and he gave just a great talk about all his Bell experiments (including one of the recent loophole-free measurements). In the discussion somebody asked the question about the collapse (so I could ask another question about whether the communication loophole is really closed by using "random number generators" to switch the distant measurements at A's and B's place in a way that no FTL information transfer at the two sites is possible, but that's another story). His answer was very pragmatic too: He took the epistemic point of view of Bohr (he also mentioned Heisenberg, but I'm not sure whether Bohr's and Heisenberg's view on this subject are really the same), i.e., that the quantum formalism is just a way to describe probabilities and that the collapse is indeed nothing else than updating the description due to reading off a measurement result. So at least Zeilinger, who did all these mind-boggling experiments for his whole life, has a very down-to-earth, no-nonsense view on this issue. I was very satisfied ;-)).
 
  • Like
Likes Mentz114
  • #152
vanhees71 said:
Well, this point of view is also a bit dangerous, because what's meant by an ensemble is that you can prepare each single member of the ensemble in a well-defined way, which finally defines the idea of "state".

In the formalism, a state is just a self-adjoint, positive trace-class operator with trace 1, but that's an empty phrase from the physics point of view, because physics is about real things in the lab, and thus it must be possible to define a state in an operational way for a single object. In this sense the question of the collapse is of some importance, i.e., how can you make sure that you prepare a real-world system in a state which is described by the abstract statistical operator?

I take a pragmatic view on this: A state is defined by a real-world experimental setup. E.g., at a particle accelerator you prepare particles in a state with a quite well-defined momentum. Accelerator physicists construct their devices without much use of quantum theory as far as I know, but they use the classical description of the motion of charged particles in the classical electromagnetic fields designed to achieve a particle beam of high quality (i.e., high luminosity with a pretty well defined momentum).

Also the preparations usually discussed in textbooks, in the sense of idealized von Neumann filter measurements, can be understood in a very pragmatic way. Take the Stern-Gerlach experiment as an example. This can be fully treated quantum mechanically (although it's usually not done in the usual textbooks; for a very simple introduction, you can have a look at my QM 2 manuscript (in German) [1]). Then you have a rather well-defined spin-position entangled state with practically separated partial beams of definite spin (determined spin-z component). Then you simply block all unwanted partial beams by putting up some absorber material in the way at the corresponding position. What's left is then by construction a beam of well-defined ##\sigma_z## eigenstates.

Last but not least, you have to check such claims empirically, i.e., you have to make a sufficient set of measurements to make sure that you have really prepared the state with the accuracy you want.

Now comes the dilemma: We all tend to think in the naive collapse way when considering such filter measurements, assuming that the naive pragmatic way of filtering away the unwanted beams really does prepare the remaining beam in the way we think. This means that we assume that with this preparation procedure each single system (as a part of the ensemble used to check the probabilistic prediction of QT) is in this very state (it can of course just as well be a mixture). On the other hand, if we believe that relativistic quantum field theory provides the correct description, there's no action at a distance as implicitly assumed in the collapse hypothesis but only local interactions of the particles with all the elements of the preparation apparatus, including the "beam dumps" filtering away the unwanted beams. So you can take the collapse as a short-hand description of the preparation procedure, but not literally as something happening to the real-world entities (or ensembles of so prepared entities) without getting into a fundamental contradiction with the very foundations of local relativistic QT.

It's also interesting to see what experts in this field think. Yesterday we had Anton Zeilinger in our Physics Colloquium, and he gave just a great talk about all his Bell experiments (including one of the recent loophole-free measurements). In the discussion somebody asked the question about the collapse (so I could ask another question about whether the communication loophole is really closed by using "random number generators" to switch the distant measurements at A's and B's place in a way that no FTL information transfer at the two sites is possible, but that's another story). His answer was very pragmatic too: He took the epistemic point of view of Bohr (he also mentioned Heisenberg, but I'm not sure whether Bohr's and Heisenberg's view on this subject are really the same), i.e., that the quantum formalism is just a way to describe probabilities and that the collapse is indeed nothing else than updating the description due to reading off a measurement result. So at least Zeilinger, who did all these mind-boggling experiments for his whole life, has a very down-to-earth, no-nonsense view on this issue. I was very satisfied ;-)).

As usual, you are wrong about the foundations of local relativistic QT. But let's not discuss that further. What I wish to stress here is that I do like the idea that the collapse is an update of the description upon reading off a measurement result. I have always objected to your use of relativity to say that it must be such an update and nothing else. So your quote of Zeilinger does not support your views, since he did not argue his point using relativity.
 
  • #153
I'd be very interested in a clear mathematical statement of what's wrong with my view of local relativistic QFT. It's simply the standard-textbook approach, using the microcausality condition for local observables (i.e., the commutation of local operators, such as densities, at spacelike separations). You always claim that this standard treatment is wrong, but I've not yet seen a convincing (mathematical) argument against it. Note that the microcausality condition for the energy density operator is even necessary to have a Poincare-covariant S-matrix!
 
  • #154
vanhees71 said:
I'd be very interested in a clear mathematical statement of what's wrong with my view of local relativistic QFT. It's simply the standard-textbook approach, using the microcausality condition for local observables (i.e., the commutation of local operators, such as densities, at spacelike separations). You always claim that this standard treatment is wrong, but I've not yet seen a convincing (mathematical) argument against it. Note that the microcausality condition for the energy density operator is even necessary to have a Poincare-covariant S-matrix!

The standard treatment is right. However, microcausality does not mean what you think it means. You think that microcausality is classical relativistic causality. But it is not.
 
  • Like
Likes zonde
  • #155
What else should it be? You only repeat the same words without giving the corresponding math to justify this claim.
 
  • #156
vanhees71 said:
What else should it be? You only repeat the same words without giving the corresponding math to justify this claim.

Bell's theorem excludes classical relativistic causality.
 
  • #157
I give up. It really goes in circles :-(.
 
  • #158
vanhees71 said:
A state is defined by a real-world experimental setup. E.g., at a particle accelerator you prepare particles in a state with a quite well-defined momentum.
This means that the state is a property of the accelerator (or the beam rotating in the magnetic loop), while particles can have (loosely, classically speaking) any momentum, just distributed according to a Gaussian (or whatever precisely is prepared). In a minimal statistical interpretation (Ballentine or Peres taken literally), you can assert nothing at all [except the possible values of any set of commuting variables] about the single system, unless the state predicts some property (such as spin or polarization) exactly.
vanhees71 said:
we assume that with this preparation procedure each single system (as a part of the ensemble used to check the probabilistic prediction of QT) is in this very state (it can of course just as well be a mixture).
This is neither consistent (in a minimal statistical interpretation) nor necessary, since it is untestable. One can test only the behavior of a large number of these systems, and so one only needs to know (or can verify) that the source (preparation) indeed prepares the individual systems in a way that the statistics is satisfied. Thus you need not (and hence should not) assume it.
 
Last edited:
  • #159
vanhees71 said:
I give up. It really goes in circles :-(.

Since this is important, let me restate what the correct meaning of microcausality is. Microcausality is a sufficient condition to prevent superluminal signalling.

If spacelike observables do not commute, then measuring one will change the probabilities at a distant location, enabling superluminal signalling. So spacelike observables must commute. In the Heisenberg picture, the observables evolve with time. Then the cluster decomposition is a condition that ensures that even under time evolution, spacelike operators continue to commute.
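A small numerical illustration of the commuting case (a sketch only; the state and the operators below are made up, with the commutation enforced by putting the two parties on different tensor factors): measuring ##A## and discarding the outcome leaves every local expectation value on ##B##'s side unchanged, so no signal can be sent.
[code=python]
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2, dtype=complex)
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)

# Projectors for the distant A measurement (act only on qubit 1) ...
P_a = [np.kron(P0, I2), np.kron(P1, I2)]
# ... and a local Hermitian observable B (acts only on qubit 2), so [P_a, B] = 0
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = np.kron(I2, M + M.conj().T)

# An arbitrary (generally entangled) two-qubit state
W = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = W @ W.conj().T
rho /= np.trace(rho)

# Local statistics of B after A has been measured (outcome unknown) ...
rho_after_A = sum(P @ rho @ P for P in P_a)
# ... equal the local statistics if A had never been measured
assert np.allclose(np.trace(rho_after_A @ B), np.trace(rho @ B))
[/code]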

The important point is that "no superluminal signalling" is not the same as "classical relativistic causality".
 
  • #160
atyy said:
The standard treatment is right. However, microcausality does not mean what you think it means.
atyy said:
Let me restate what the correct meaning of microcausality is. Microcausality is a sufficient condition to prevent superluminal signalling.
[Mentor's note: An unnecessary digression has been removed from this post]

The meaning of microcausality is defined in quantum field theory. ''No superluminal signalling'' is not the real meaning of microcausality but only a minor consequence among the many far more important consequences microcausality has, such as cluster decomposition, well-defined S-matrices, etc.
 
Last edited by a moderator:
  • #161
A. Neumaier said:
This means that the state is a property of the accelerator (or the beam rotating in the magnetic loop), while particles can have (loosely, classically speaking) any momentum, just distributed according to a Gaussian (or whatever precisely is prepared). In a minimal statistical interpretation (Ballentine or Peres taken literally), you can assert nothing at all [except the possible values of any set of commuting variables] about the single system, unless the state predicts some property (such as spin or polarization) exactly.
Agreed, but as I tried to say in the quoted posting, for the very definition of the ensemble you need a clear association between the state, which provides probabilistic and only probabilistic information about the system, and the single member of the ensemble. That is, the preparation procedure (or an equivalence class of preparation procedures) must define a single system as being prepared in that state. Of course, since there is only probabilistic content in the knowledge of the state, you can test the correctness of this association only by measuring at least a complete set of compatible observables on a repeatedly prepared system, i.e., an ensemble.

A. Neumaier said:
This is neither consistent (in a minimal statistical interpretation) nor necessary, since it is untestable. One can test only the behavior of a large number of these systems, and so one only needs to know (or can verify) that the source (preparation) indeed prepares the individual systems in a way that the statistics is satisfied. Thus you should not assume it.
Well, then you cannot even test QT in principle. You must assume that it is possible to prepare an ensemble, i.e., each single system within the ensemble, reproducibly in the state you claim to verify or falsify by making repeated measurements on the system.
 
  • #162
vanhees71 said:
You must assume that it is possible to prepare an ensemble, i.e., each single system within the ensemble
The first half is essential, always assumed, easily verifiable, and amply verified in practice.
The second half is contrary to the idea of an ensemble, and cannot be tested in the quantum domain.
The connecting ''i.e.'' is inconsistent, since it suggests that the second half is just an equivalent interpretation of the first.

In the case of an accelerator, you can at any time take 10000 systems from the rotating stream and verify that they satisfy the statistics. There is no need to assume that each individual is ''the same'' or ''identically prepared'' in any sense. It is just one of the systems prepared in the stream. The stationarity of the stream is the physical equivalent of the ''identically distributed'' assumption in mathematical statistics.

On the other hand, if you take out just one system and measure a single observable, you get just a random result, about which the preparation predicts nothing (except if the statistics is so sharp that it predicts a single value within the experimental uncertainty). This is the reason one cannot say anything about the single system. This is characteristic of any stochastic (classical or quantum) system. So what should it mean that each single system is prepared in the same way apart from that it is part of the prepared ensemble? It cannot mean anything, so talking as if there were additional meaning is irritating.

Thus if you subscribe to the statistical interpretation you should adapt your language accordingly. Nothing is lost when making the language more precise to better reflect one's interpretation. But a lot of clarity is gained, and people are less likely to misunderstand your position.
 
Last edited:
  • #163
rubi said:
Lemma (conditional probability) For commuting projections ##P_A## and ##P_B##, the conditional probability ##P(B|A)## in the state ##\rho## is given by
##P(B|A) = \mathrm{Tr}\left(\frac{P_A \rho P_A}{\mathrm{Tr}(P_A \rho P_A)} P_B\right) = \frac{\mathrm{Tr}\left(P_B P_A \rho P_A P_B\right)}{\mathrm{Tr}(P_A \rho P_A)}##.
Proof. The conditional probability is defined by ##P(B|A) = \frac{P(A\wedge B)}{P(A)}##. The projection operator for ##A\wedge B## is given by ##P_{A\wedge B}=P_A P_B##. The Born rule tells us that ##P(A)=\mathrm{Tr}(\rho P_A)## and
##P(A\wedge B) = \mathrm{Tr}(\rho P_{A\wedge B}) = \mathrm{Tr}(\rho P_A P_B) = \mathrm{Tr}(\rho P_A^2 P_B) = \mathrm{Tr} (\rho P_A P_B P_A) = \mathrm{Tr}(P_A \rho P_A P_B) = \mathrm{Tr}(P_B P_A \rho P_A P_B)##. Now use linearity of the trace to get the formula for the conditional probability.

Now we want to apply this formula to sequential measurements. Let's say we want the probability to find observable ##Y## in the set ##O_{Y}## at time ##t_2## after having found observable ##X## in the set ##O_{X}## at time ##t_1 < t_2##. This corresponds to measuring the Heisenberg observables ##X(t_1)## and ##Y(t_2)## in the same sets. Let ##\pi_X## and ##\pi_Y## be the projection valued measures of ##X## and ##Y##. The corresponding projection valued measures of ##X(t_1)## and ##Y(t_2)## are given by ##U(t_1)^\dagger \pi_X U(t_1)## and ##U(t_2)^\dagger \pi_Y U(t_2)##. Thus we are interested in the projections ##P_A = U(t_1)^\dagger \pi_X(O_X) U(t_1)## and ##P_B = U(t_2)^\dagger \pi_Y(O_Y) U(t_2)##. We assume that these operators commute, which is true up to arbitrarily small corrections for instance for filtering type measurements or after a sufficient amount of decoherence has occurred. Thus we can apply the lemma and get:
##P(Y(t_2)\in O_Y | X(t_1) \in O_X) = \frac{\mathrm{Tr}(P_B P_A \rho P_A P_B)}{\mathrm{Tr}(P_A \rho P_A)} = \frac{\mathrm{Tr}(\pi_Y(O_Y) U(t_2-t_1) \pi_X(O_X) U(t_1) \rho U(t_1)^\dagger \pi_X(O_X) U(t_2-t_1)^\dagger \pi_Y(O_Y))}{\mathrm{Tr}(\pi_X(O_X) U(t_1) \rho U(t_1)^\dagger \pi_X(O_X))}##
This is the right formula and we only assumed the Born rule and decoherence/filtering measurements.

I made comments in posts #148 and #150 on a later post of yours. One more comment on this earlier post. In deriving the generalized Born rule for commuting sequential measurements, doesn't one have to assume that the order of the measurements does not alter the joint probabilities? That seems like a generalization of Dirac's requirement (for sharp measurements with discrete spectra) that immediate repetition of a measurement gives the same result, from which Dirac derives the projection postulate. If this is right, then it remains true that one needs an additional postulate beyond the Born rule. One need not state the projection postulate explicitly, but like Dirac, something additional has to be introduced, e.g., that immediate repetition yields the same outcome.
 
  • #164
atyy said:
But can one really do without collapse? The reason I am doubtful is that at each measurement in the Schroedinger picture, the rule that is used is the Born rule, not the generalized Born rule. So by using successive applications of the Born rule, one does not get the joint probability, which the generalized Born rule gives. So rather I would say that although the generalized Born rule can be derived without collapse as a postulate, the generalized Born rule implies collapse.

I don't understand your claim here. Let's establish a bottom line, here. Are you saying that there is some sequence of measurements that can be performed such that the predicted statistics are different, depending on whether you compute them assuming collapse after each measurement, or not?
 
  • #165
stevendaryl said:
I don't understand your claim here. Let's establish a bottom line, here. Are you saying that there is some sequence of measurements that can be performed such that the predicted statistics are different, depending on whether you compute them assuming collapse after each measurement, or not?

No (and yes). Rather, I am saying that without collapse, in the Schroedinger picture, one cannot even compute the joint probability. If at t1 we measure A, the Born rule gives P(A), and at t2 we measure B, the Born rule gives P(B). With collapse, or with the generalized Born rule, one is able to compute P(A,B).

The Yes part of the answer is because there isn't necessarily a unique collapse. Thus the projection postulate is not the most general form of collapse. Different collapses do lead to different statistics.
 
  • #166
Collapse is, for me, synonymous with erasure of information. Rovelli writes that since we can always get new information about a finite system, old information has to be replaced by new information. That is what is done by collapse.
Could you tell me what the quantum no-deleting theorem implies? Is it against collapse?
 
  • #167
atyy said:
No (and yes). Rather, I am saying that without collapse, in the Schroedinger picture, one cannot even compute the joint probability. If at t1 we measure A, the Born rule gives P(A), and at t2 we measure B, the Born rule gives P(B). With collapse, or with the generalized Born rule, one is able to compute P(A,B).

We may have gone through this before, but if so, I don't remember what conclusions we reached.

But, here are the two ways of computing the results of two sequential measurements, one that uses collapse, and one that does not. Let's assume that the measurements are performed by a machine that is simple enough (or we are smart enough) that it can be analyzed using quantum mechanics.
  1. The system to be studied is set up in state [itex]|\psi\rangle[/itex]
  2. The machine measures A (assumed to be a 0/1-valued measurement, for simplicity), and finds [itex]A[/itex] is 1.
  3. Later, the machine measures B (also assumed to 0/1-valued), and finds [itex]B[/itex] is 1.
The question is: what's the probability of these results?

Collapse way:
We compute the probability of this as follows:
  • The system is initially in state [itex]\psi[/itex]
  • We evolve [itex]\psi[/itex] forward in time to the time of step 2 above. Now the system is in state [itex]\psi'[/itex]
  • Write [itex]|\psi'\rangle = \alpha |\psi_1\rangle + \beta |\psi_0\rangle[/itex], where [itex]\psi_1[/itex] and [itex]\psi_0[/itex] are eigenstates of [itex]A[/itex] with eigenvalues 1 and 0, respectively.
  • Then the probability of getting 1 is [itex]|\alpha|^2[/itex].
  • After measuring [itex]A=1[/itex], the state collapses into state [itex]\psi_1[/itex].
  • Now, we evolve [itex]\psi_1[/itex] in time to the time of step 3 above. Now the system is in state [itex]\psi_1'[/itex].
  • Write [itex]|\psi_1'\rangle = \gamma |\psi_{11}\rangle + \delta |\psi_{10}\rangle[/itex], where [itex]\psi_{11}[/itex] and [itex]\psi_{10}[/itex] are eigenstates of [itex]B[/itex] with eigenvalues 1 and 0, respectively.
  • The probability of measuring [itex]B=1[/itex] at this point is [itex]|\gamma|^2[/itex]
  • So the probability of getting two 1s is [itex]|\alpha|^2 |\gamma|^2[/itex]
Noncollapse way:
Let's analyze the composite system [itex]|\Psi\rangle = |\psi\rangle \otimes |\phi\rangle[/itex], where [itex] |\psi\rangle[/itex] describes the system, and [itex]|\phi\rangle[/itex] describes the device. For simplicity, let's assume that the composite system has no interaction with anything else up until step 3.
  • The composite system is initially in state [itex]|\Psi\rangle[/itex]
  • Evolve the composite system to time of step 3 (We don't need to stop at step 2! That's just an ordinary quantum-mechanical interaction.) Now the system is in state [itex]|\Psi'\rangle[/itex]
  • We write [itex]|\Psi'\rangle = a |\Psi_{00}\rangle + b |\Psi_{01}\rangle + c |\Psi_{10}\rangle + d |\Psi_{11}\rangle[/itex] where:
    • [itex] |\Psi_{00}\rangle[/itex] is a state in which the measuring device has a record of getting [itex]0[/itex] for the first measurement and [itex]0[/itex] for the second measurement.
    • [itex] |\Psi_{01}\rangle[/itex] is a state in which the measuring device has a record of getting [itex]0[/itex] for the first measurement and [itex]1[/itex] for the second measurement.
    • etc.
  • Then the probability of getting two 1s in a row is [itex]|d|^2[/itex]
Okay, this is very much an oversimplification, because I ignored the interaction with the environment, and because a macroscopic device doesn't have just a single state corresponding to "measuring 1" or "measuring 0", but has a whole set of states that are consistent with those measurements. But anyway, you get the idea.

My claim is that, [itex]|d|^2 \approx |\alpha|^2 |\gamma|^2[/itex], and that the difference between them (assuming that we could actually compute [itex]d[/itex]) is completely negligible.

From this point of view, the use of "collapse" is just a calculational shortcut that avoids analyzing macroscopic devices quantum-mechanically.
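For what it's worth, this is easy to check in a toy model (a sketch only: the random Hamiltonian, the choices "A = 1 means the system is found in [itex]|1\rangle[/itex]" and "B = 1 means the system is found in [itex]|-\rangle[/itex]", and the one-record-qubit-per-measurement device are all invented for illustration; in this idealized model the two numbers even agree exactly rather than approximately):
[code=python]
import numpy as np

def evolve(Hm, t):
    """Unitary exp(-i*Hm*t) for a Hermitian matrix Hm (hbar = 1)."""
    E, V = np.linalg.eigh(Hm)
    return (V * np.exp(-1j * E * t)) @ V.conj().T

def kron(*ops):
    """Tensor product of several matrices."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

rng = np.random.default_rng(2)
W = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H_sys = W + W.conj().T                      # some system Hamiltonian
t1, t2 = 0.7, 1.9                           # times of the two measurements

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                  # initial system state

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)                  # "A = 1": system found in |1>
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)   # "B = 1": system found in |->

# Collapse way: Born rule, project, evolve the collapsed state, Born rule again.
alpha = np.vdot(ket1, evolve(H_sys, t1) @ psi)
gamma = np.vdot(minus, evolve(H_sys, t2 - t1) @ ket1)
p_collapse = abs(alpha) ** 2 * abs(gamma) ** 2

# Non-collapse way: system plus two record qubits, everything unitary,
# one Born-rule readout of the device records at the very end.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Hd = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)

M_A = kron(P0, I2, I2) + kron(P1, sx, I2)   # record 1 copies the z-basis result
M_B = kron(Hd, I2, I2) @ (kron(P0, I2, I2) + kron(P1, I2, sx)) @ kron(Hd, I2, I2)
                                            # record 2 copies the x-basis result

Psi = np.kron(np.kron(psi, ket0), ket0)
Psi = kron(evolve(H_sys, t1), I2, I2) @ Psi       # evolve to the first measurement
Psi = M_A @ Psi                                   # measurement interaction, no collapse
Psi = kron(evolve(H_sys, t2 - t1), I2, I2) @ Psi  # evolve to the second measurement
Psi = M_B @ Psi                                   # second measurement interaction

d_sq = np.real(np.vdot(Psi, kron(I2, P1, P1) @ Psi))  # |d|^2: both records read "1"

assert np.isclose(d_sq, p_collapse)
print(d_sq, p_collapse)
[/code]
With a realistic macroscopic device and an environment the agreement would of course only hold up to the negligible interference terms mentioned above.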
 
Last edited:
  • #168
stevendaryl said:
We may have gone through this before, but if so, I don't remember what conclusions we reached.

But, here are the two ways of computing the results of two sequential measurements, one that uses collapse, and one that does not. Let's assume that the measurements are performed by a machine that is simple enough (or we are smart enough) that it can be analyzed using quantum mechanics.
  1. The system to be studied is set up in state [itex]|\psi\rangle[/itex]
  2. The machine measures A (assumed to be a 0/1-valued measurement, for simplicity), and finds [itex]A[/itex] is 1.
  3. Later, the machine measures B (also assumed to 0/1-valued), and finds [itex]B[/itex] is 1.
The question is: what's the probability of these results?

Collapse way:
We compute the probability of this as follows:
  • The system is initially in state [itex]\psi[/itex]
  • We evolve [itex]\psi[/itex] forward in time to the time of step 2 above. Now the system is in state [itex]\psi'[/itex]
  • Write [itex]|\psi'\rangle = \alpha |\psi_1\rangle + \beta |\psi_0\rangle[/itex], where [itex]\psi_1[/itex] and [itex]\psi_0[/itex] are eigenstates of [itex]A[/itex] with eigenvalues 1 and 0, respectively.
  • Then the probability of getting 1 is [itex]|\alpha|^2[/itex].
  • After measuring [itex]A=1[/itex], the state collapses into state [itex]\psi_1[/itex].
  • Now, we evolve [itex]\psi_1[/itex] in time to the time of step 3 above. Now the system is in state [itex]\psi_1'[/itex].
  • Write [itex]|\psi_1'\rangle = \gamma |\psi_{11}\rangle + \delta |\psi_{10}\rangle[/itex], where [itex]\psi_{11}[/itex] and [itex]\psi_{10}[/itex] are eigenstates of [itex]B[/itex] with eigenvalues 1 and 0, respectively.
  • The probability of measuring [itex]B=1[/itex] at this point is [itex]|\gamma|^2[/itex]
  • So the probability of getting two 1s is [itex]|\alpha|^2 |\gamma|^2[/itex]
Noncollapse way:
Let's analyze the composite system [itex]|\Psi\rangle = |\psi\rangle \otimes |\phi\rangle[/itex], where [itex] |\psi\rangle[/itex] describes the system, and [itex]|\phi\rangle[/itex] describes the device. For simplicity, let's assume that the composite system has no interaction with anything else up until step 3.
  • The composite system is initially in state [itex]|\Psi\rangle[/itex]
  • Evolve the composite system to time of step 3 (We don't need to stop at step 2! That's just an ordinary quantum-mechanical interaction.) Now the system is in state [itex]|\Psi'\rangle[/itex]
  • We write [itex]|\Psi'\rangle = a |\Psi_{00}\rangle + b |\Psi_{01}\rangle + c |\Psi_{10}\rangle + d |\Psi_{11}\rangle[/itex] where:
    • [itex] |\Psi_{00}\rangle[/itex] is a state in which the measuring device has a record of getting [itex]0[/itex] for the first measurement and [itex]0[/itex] for the second measurement.
    • [itex] |\Psi_{01}\rangle[/itex] is a state in which the measuring device has a record of getting [itex]0[/itex] for the first measurement and [itex]1[/itex] for the second measurement.
    • etc.
  • Then the probability of getting two 1s in a row is [itex]|d|^2[/itex]
Okay, this is very much an oversimplification, because I ignored the interaction with the environment, and because a macroscopic device doesn't have just a single state corresponding to "measuring 1" or "measuring 0", but has a whole set of states that are consistent with those measurements. But anyway, you get the idea.

My claim is that, [itex]|d|^2 \approx |\alpha|^2 |\gamma|^2[/itex], and that the difference between them (assuming that we could actually compute [itex]d[/itex]) is completely negligible.

From this point of view, the use of "collapse" is just a calculational shortcut that avoids analyzing macroscopic devices quantum-mechanically.

I think we (kith, you, rubi, and I) have agreed many times that this uses the deferred measurement principle, and is a way of calculating the same probabilities without using collapse. However, what we have is a simultaneous measurement at a single late time, not two measurements in sequence. This is the same as avoiding nonlocality in quantum mechanics by saying that there is no reality to the distant observer, since the distant observer does not need to be real until she meets Bob. So yes, collapse can be avoided, just like nonlocality. However, one has to place some non-standard restriction on what one considers real (sequential measurements or distant observers).
 
  • #169
atyy said:
I have no problem with deriving the generalized Born rule without collapse for commuting observables. Ballentine does that, and I believe his argument is fine. The part of his argument I did not buy was his attempt to extend the argument to non-commuting observables. The important part of your argument is to circumvent the need for sequential measurement of non-commuting operators, which is fine, but it should be stated as a non-standard assumption.
I make no assumptions but the usual axioms of quantum theory minus collapse. I just shift the description of the measurement apparatus to the quantum side and that is clearly the right thing to do since an apparatus is also made of matter and is thus governed by the laws of quantum mechanics. I admit that my description of the apparatus is unrealistic, since I just treated it as a black box, rather than a multi-particle system, but that shouldn't be a conceptual problem.

atyy said:
Also, it does not support the idea that there is a wave function of the universe that evolves unitarily without collapse, because one still needs something in addition to the wave function, e.g. the classical apparatus.
The apparatus is a quantum object. It just behaves very classically, which I enforced by modeling it using commuting observables. I don't really add anything, I just use a more complex description of the physical system.

atyy said:
But can one really do without collapse? The reason I am doubtful is that at each measurement in the Schroedinger picture, the rule that is used is the Born rule, not the generalized Born rule. So by using successive applications of the Born rule, one does not get the joint probability, which the generalized Born rule gives. So rather I would say that although the generalized Born rule can be derived without collapse as a postulate, the generalized Born rule implies collapse.
I don't understand this comment. In my derivation, I have used only the Born rule ##P(X) = \mathrm{Tr}(\rho P_X)## and I also don't apply it successively. I also used that for commuting ##P_A##, ##P_B## it is true that ##P_{A\wedge B}=P_A P_B##; this is not an axiom, but it can be derived: ##(P_A\psi = \psi) \wedge (P_B\psi=\psi) \Leftrightarrow P_A P_B \psi = \psi##.
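For the record, the non-trivial direction of that equivalence uses only idempotence and commutativity (a short sketch): if ##P_A P_B\psi=\psi##, then ##P_B\psi = P_B P_A P_B\psi = P_A P_B^2\psi = P_A P_B\psi = \psi## and ##P_A\psi = P_A^2 P_B\psi = P_A P_B\psi = \psi##; the converse direction is immediate. Commutativity is also what makes ##P_A P_B## an orthogonal projection in the first place, since ##(P_A P_B)^2 = P_A P_B## and ##(P_A P_B)^\dagger = P_B P_A = P_A P_B##.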

If ##P_A##, ##P_B## don't commute, then ##A\wedge B## is meaningless in a quantum world. It is neither true nor false. Even though it seems like a perfectly meaningful logical statement, it is really just as meaningful as "at night it's colder than outside". ##A\wedge B## just isn't in the list of things that can or cannot occur. Since all pointer readings are definite facts about the world, they must be modeled by commuting projectors. On the other hand, we can never get information about a quantum system other than by looking at a pointer. So in principle, it is enough to know the generalized Born rule only for commuting observables.

atyy said:
I made comments in posts #148 and #150 on a later post of yours. One more comment on this earlier post. In deriving the generalized Born rule for commuting sequential measurements, doesn't one have to assume that the order of the measurements does not alter the joint probabilities? That seems like a generalization of Dirac's requirement (for sharp measurements with discrete spectra) that immediate repetition of a measurement gives the same result, from which Dirac derives the projection postulate. If this is right, then it remains true that one needs an additional postulate beyond the Born rule. One need not state the projection postulate explicitly, but like Dirac, something additional has to be introduced, e.g., that immediate repetition yields the same outcome.
I don't see where I need that assumption. Can you point to a specific part of my calculation?
 
  • #170
A. Neumaier said:
The first half is essential, always assumed, easily verifiable, and amply verified in practice.
The second half is contrary to the idea of an ensemble, and cannot be tested in the quantum domain.
The connecting ''i.e.'' is inconsistent, since it suggests that the second half is just an equivalent interpretation of the first.
I don't understand this claim. An ensemble, by definition, is the repeated realization of the same setup: single systems, prepared independently of each other, that you want to investigate. Each measurement used to verify the probabilistic properties of the quantum state you associate with this experimental setup is performed on an individual member of the ensemble.

E.g., if you measure a cross section, say for Higgs production in pp collisions at the LHC, you prepare very many pp initial states in the accelerator (in the form of particle bunches with a pretty well determined beam energy/momentum) and let them interact. You can consider at least the pp pairs in any bunch as independent, but experience shows you can understand the measured cross sections by forgetting about all details and just using the usual momentum-eigenstate cross sections (carefully defined in a limiting process starting from wave packets, whose width in momentum space you let become arbitrarily small at the end of the calculation, as explained nicely, e.g., in Peskin/Schroeder). Of course, the cross section is a probabilistic quantity, giving the probability for producing Higgs bosons, and thus you can only measure it by repeating the reaction many times ("ensemble"), but you assume that each individual setup can be prepared in the assumed initial state you associate with the ensemble. Otherwise the ensemble interpretation doesn't make sense.
 
  • #171
atyy said:
I made comments in posts #148 and #150 on a later post of yours. One more comment on this earlier post. In deriving the generalized Born rule for commuting sequential measurements, doesn't one have to assume that the order of the measurements does not alter the joint probabilities? That seems like a generalization of Dirac's requirement (for sharp measurements with discrete spectra) that immediate repetition of a measurement gives the same result, from which Dirac derives the projection postulate. If this is right, then it remains true that one needs an additional postulate beyond the Born rule. One need not state the projection postulate explicitly, but like Dirac, something additional has to be introduced, eg, immediate repetition yields the same outcome.
I also do not understand your quibbles here. According to QT (in the minimal interpretation) if you measure compatible observables, represented by commuting self-adjoint operators, there's no problem with the order of (filter!) measurements.
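A short check (a minimal sketch, assuming the two ideal filter measurements are described by commuting spectral projectors ##\hat{P}_a## and ##\hat{P}_b##): writing the joint probability of first obtaining ##a## and then ##b## in the form ##\mathrm{Tr}(\hat{P}_b \hat{P}_a \hat{\rho} \hat{P}_a \hat{P}_b)## used earlier in this thread, cyclicity of the trace, ##\hat{P}^2=\hat{P}##, and ##[\hat{P}_a,\hat{P}_b]=0## give
$$\mathrm{Tr}(\hat{P}_b \hat{P}_a \hat{\rho} \hat{P}_a \hat{P}_b) = \mathrm{Tr}(\hat{P}_a \hat{P}_b \hat{P}_b \hat{P}_a \hat{\rho}) = \mathrm{Tr}(\hat{P}_a \hat{P}_b \hat{\rho}) = \mathrm{Tr}(\hat{P}_b \hat{P}_a \hat{P}_a \hat{P}_b \hat{\rho}) = \mathrm{Tr}(\hat{P}_a \hat{P}_b \hat{\rho} \hat{P}_b \hat{P}_a),$$
so the two orderings of the filters yield the same joint probability; no extra postulate about ordering is needed once commutativity is assumed.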

It's important to keep in mind that you discuss here a very specific (in most practical cases overidealized) class of measurements, i.e., von Neumann filter measurements. This means that if you measure observable ##A## first and then ##B## (no matter whether those are compatible with each other or not), you assume that you use the measurement of ##A## for state preparation, by filtering out the part of the ensemble that gave a certain outcome ##a##, where ##a## is an eigenvalue of the operator ##\hat{A}## representing the observable ##A##. Since you don't know more than that the systems within the new ensemble are now prepared in some eigenstate with eigenvalue ##a##, the choice of the new state should be the normalized projector onto the corresponding eigenspace,
$$\hat{\rho}_A(a)=\frac{1}{d_a}\sum_{\beta} |a,\beta \rangle \langle a,\beta|,$$
where ##d_a## is the dimension of that eigenspace, i.e., the number of values the degeneracy label ##\beta## takes.
Now you measure ##B##. Then according to the new state you know that the probability to measure ##b## is
$$P_B(b|\rho_A)=\sum_{\gamma} \langle b,\gamma|\hat{\rho}_A(a)|b,\gamma \rangle.$$
This is just Born's rule. Nowhere have I invoked a collapse hypothesis, just the assumption that I have performed an ideal filter measurement of ##A##. Whether these assumptions hold true, i.e., whether you really prepare the state described by ##\hat{\rho}_A(a)##, must of course be checked on a sufficiently large ensemble. It can be verified completely only by measuring a complete set of compatible observables on the ensemble.
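To make the two formulas above concrete, here is a minimal numerical sketch; the two-qubit Hilbert space, the observables ##\hat{A}=\hat{\sigma}_z\otimes 1## and ##\hat{B}=1\otimes\hat{\sigma}_z##, and the outcomes ##a=b=+1## are toy choices made purely for illustration:

```python
import numpy as np

# Toy example: A = sigma_z (x) 1 and B = 1 (x) sigma_z on C^2 (x) C^2.
# P_a projects onto the (doubly degenerate) eigenspace A = +1; the
# degeneracy label beta runs over the basis of the second factor.
I2 = np.eye(2)
up = np.diag([1.0, 0.0])      # |+><+| for a single qubit
P_a = np.kron(up, I2)         # spectral projector of A for a = +1
P_b = np.kron(I2, up)         # spectral projector of B for b = +1

# State assigned to the filtered sub-ensemble after the ideal filter
# measurement of A with outcome a = +1: the normalized projector onto
# that eigenspace, as in the formula for rho_A(a) above.
rho_A = P_a / np.trace(P_a)

# Born's rule for the subsequent measurement of B.
prob_b = np.trace(rho_A @ P_b).real
print(prob_b)   # 0.5 -- the filter on A alone tells us nothing about B
```

The printed value is just Born's rule applied to the filtered state; no collapse postulate enters anywhere in the computation.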
 
  • #172
vanhees71 said:
you assume that each individual setup can be prepared in the assumed initial state you associate with the ensemble.
Where do you actually use this uncheckable assumption? Never. You only use that you take the particles from one or more streams that you prepared by the same, particular setting of the accelerator controls. This is the preparation, to which a particular state is associated.

This state (a property of the preparation procedure, as the statistical interpretation asserts) is a reasonably deterministic function of the accelerator controls. Which state is prepared for a particular setting of the controls can be predicted by classical reasoning about the accelerator design, and can be calibrated by measuring the momentum distribution of sufficiently many particles from the stream. One can repeat this often enough to infer the quality of the preparation. Ultimately, the state says that the distribution of the particle momenta has a certain, e.g., Gaussian form. Nothing at all is assumed about the individual particles.
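For instance, the assertion may be nothing more than that the beam momenta are distributed (approximately) as
$$w(\mathbf{p}) = \frac{1}{(2\pi\sigma^2)^{3/2}}\,\exp\!\left(-\frac{(\mathbf{p}-\bar{\mathbf{p}})^2}{2\sigma^2}\right),$$
with mean ##\bar{\mathbf{p}}## and width ##\sigma## fixed by the calibration (an isotropic Gaussian is used here only as the simplest illustrative shape). This is a statement about relative frequencies in the stream, not about any single particle.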

From this you can use quantum mechanical calculations to compute the predicted cross sections in collision experiments that use this beam. You can perform the experiment with sufficiently many particles from the stream, measure the resulting particle tracks, infer from them the particles obtained in the collision, and calculate from these the cross sections. You can then compare the results with the predictions and find agreement within the accuracy of your preparation.

Thus the design, performance, and analysis of the whole experiment are described solely in terms of the preparation procedure, its state, and the experimental results.

At no point do you need any information about any property of an individual particle, or any claim that the particles are prepared in the same state (a meaningless statement in the minimal interpretation).

By talking about the situation as if you'd need to know something about an individual system, you leave the statistical setting, which never makes any assertion about an individual.
 
  • #173
Yes, that's precisely what I say. I prepare individual systems and make measurements on individual systems, repeating this procedure in the same way very many times, which forms the ensemble. Of course, I cannot make predictions for any individual outcome, as long as I have only a probabilistic rather than a deterministic theory, but still for the ensemble idea to make sense you must assume that the preparation procedure, applied to individual systems, leads to the specific states (having "only" probabilistic meaning according to Born's rule) describing the ensemble associated with the so-prepared quantum state. Of course, you need the information about the preparation procedure applied to each individual particle (or bunch in the accelerator).

Another example is the Bell tests, performed, e.g., with polarization-entangled biphotons. There you must make sure that you deal with independently prepared biphotons in a very specific (here even pure!) polarization state. The measurement is then performed at two "far-distant" places to rule out, among others, the communication loophole, i.e., you randomly decide what you measure in such a way that there cannot be communication between these choices. Strictly speaking, this loophole is not fully closed. As I learned from Zeilinger's talk, they now plan to use entanglement swapping, using photons from very distant pulsars, which should not in any way have been able to communicate with each other (assuming the space-time structure of GR to be correct).

That's why for me a quantum state is defined as an equivalence class of preparation procedures leading to ensembles that are described (with sufficient accuracy) by a statistical operator of the formalism (which can be a pure or a mixed state of course; in almost all cases the latter).

Of course, it's true that you have to calibrate your preparation, i.e., you must know the precise luminosity and momentum distribution of your particles before you start to measure any cross sections.
 
  • #174
vanhees71 said:
still for the ensemble idea to make sense you must assume that the preparation procedure, applied to individual systems, leads to the specific states
No. Where do you use this assumption? Nowhere!
I gave a complete account without ever mentioning anything about the particles except that they are generated by the accelerator. The state is in no way attached to the individual systems - the latter are completely anonymous entities in the stream generated by the source.

The only use made of the state is as a (cumulative, average) property of the source - so why introduce the unobservable and questionable additional concept of something associated with the single system? It is as unnecessary as the Bohmian hidden variables and only invites misunderstandings (such as questions about collapse)!
 
  • #175
You never prepare the ensemble all at once, or at least it would be very difficult to prepare it that way without unwanted correlations. E.g., it's a (measurable!) difference whether you prepare ##N## times a polarization-entangled two-photon Fock state or some single state with ##2N## photons. Thus you have to assume that a sufficiently accurate preparation procedure of the single system really leads to the ensembles you intend to describe with the corresponding quantum state. Of course, never has a contradiction between this assumption and real experiments been found, and that's why quantum theory (in the minimal interpretation) works so successfully. But it should be very clear that the state has a meaning for an individual system only via the relation of an equivalence class of preparation procedures to the ensembles obtained by independently preparing many individual systems in the so-defined state.
 
