Is the collapse indispensable?

  • #151
A. Neumaier said:
You are an outsider - where is your record of publications in the foundations of quantum mechanics? Or at least you are using a pseudonym so that you appear to be an outsider. This in itself would not be problematic. But you are making erroneous accusations based on a lack of sufficient understanding. This is very problematic.

These explanations are valid for the Copenhagen interpretation but are meaningless in the context of the minimal (statistical) interpretation. In the minimal (statistical) interpretation discussed by Ballentine and Peres, a single system has no associated state at all. Thus your statements ''each copy of the system is in the same [resp. a different] pure state'' do not apply in their interpretation. You are seeing errors in their book only because you project your own Copenhagen-like interpretation (where a single system has a state) into a different interpretation that explicitly denies this. If a single system has no state, there is nothing that could collapse, hence there is no collapse. Upon projecting away one of the spins, an ensemble of 2-spin systems in an entangled pure state automatically is an ensemble in a mixed state of the subsystem, without anything mysterious having to be in between. Looking at conditional expectations is all that is needed to verify this. No collapse is needed.

Thus not Ballentine and Peres but your understanding of their exposition is faulty. You should apologize for having discredited highly respectable experts on the foundations of quantum mechanics on insufficient grounds.
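
For concreteness, the reduced-state remark above can be checked directly; the following is a small NumPy sketch (the singlet is chosen as an arbitrary example of an entangled pure state):

```python
import numpy as np

# Singlet state (|01> - |10>)/sqrt(2) of two spins, in the basis |00>, |01>, |10>, |11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                    # pure state of the 2-spin system

# Partial trace over the second spin: rho_1[i, j] = sum_k rho[(i, k), (j, k)]
rho_1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.round(rho_1, 3))                          # 0.5 * identity: the subsystem ensemble is maximally mixed
print("purity Tr(rho_1^2) =", np.trace(rho_1 @ rho_1).real)   # 0.5 < 1, i.e., a proper mixture
```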

Well, this point of view is also a bit dangerous, because what's meant by an ensemble is that you can prepare each single member of the ensemble in a well-defined way, which finally defines the idea of "state".

In the formalism, a state is just a positive semidefinite self-adjoint trace-class operator with trace 1, but that's an empty phrase from the physics point of view, because physics is about real things in the lab, and thus it must be possible to define a state in an operational way for a single object. In this sense the question of the collapse is of some importance, i.e., how can you make sure that you prepare a real-world system in a state which is described by the abstract statistical operator?

I take a pragmatic view on this: A state is defined by a real-world experimental setup. E.g., at a particle accelerator you prepare particles in a state with a quite well-defined momentum. Accelerator physicists construct their devices without much use of quantum theory as far as I know, but they use the classical description of the motion of charged particles in the classical electromagnetic fields designed to achieve a particle beam of high quality (i.e., high luminosity with a pretty well defined momentum).

The preparations usually discussed in textbooks, in the sense of idealized von Neumann filter measurements, can also be understood in a very pragmatic way. Take the Stern-Gerlach experiment as an example. This can be fully treated quantum mechanically (although it's usually not done in the usual textbooks; for a very simple introduction, you can have a look at my QM 2 manuscript (in German) [1]). Then you have a rather well-defined spin-position entangled state with practically separated partial beams of definite spin (determined spin-z component). Then you simply block all unwanted partial beams by putting some absorber material in the way at the corresponding positions. What's left is then by construction a beam of well-defined ##\sigma_z## eigenstates.
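
A toy version of this filtering, with the two spatially separated partial beams idealized as two orthogonal position modes, is the following sketch (the amplitudes 0.6 and 0.8i are arbitrary):

```python
import numpy as np

up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)          # spin states
beam_up, beam_dn = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)  # separated beams

# Spin-position entangled state behind the Stern-Gerlach magnet
psi = 0.6 * np.kron(up, beam_up) + 0.8j * np.kron(down, beam_dn)

# The absorber blocks the "down" beam: keep only the component in the transmitted region
keep = np.kron(np.eye(2), np.outer(beam_up, beam_up.conj()))
out = keep @ psi
out /= np.linalg.norm(out)                          # the surviving (renormalized) beam

# Reduced spin state of the surviving beam: a pure sigma_z = +1 eigenstate
rho_spin = np.outer(out, out.conj()).reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.round(rho_spin, 3))                        # [[1, 0], [0, 0]] = |up><up|
```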

Last but not least, you have to check such claims empirically, i.e., you have to make a sufficient set of measurements to make sure that you have really prepared the state with the accuracy you want.

Now comes the dilemma: We all tend to think in the naive collapse way when considering such filter measurements, assuming that the naive pragmatic way of filtering away the unwanted beams really does prepare the remaining beam in the way we think. This means that we assume that with this preparation procedure each single system (as a part of the ensemble used to check the probabilistic prediction of QT) is in this very state (it can of course also be a mixture). On the other hand, if we believe that relativistic quantum field theory provides the correct description, there's no action at a distance as implicitly assumed in the collapse hypothesis but only local interactions of the particles with all the elements of the preparation apparatus, including the "beam dumps" filtering away the unwanted beams. So you can take the collapse as a short-hand description of the preparation procedure, but not literally as something happening to the real-world entities (or ensembles of so prepared entities) without getting into a fundamental contradiction with the very foundations of local relativistic QT.

It's also interesting to see what experts in this field think. Yesterday we had Anton Zeilinger in our Physics Colloquium, and he gave just a great talk about all his Bell experiments (including one of the recent loophole-free measurements). In the discussion somebody asked the question about the collapse (so I could ask another question, about whether the communication loophole is really closed by using "random number generators" to switch the distant measurements at A's and B's place in a way that no FTL information transfer between the two sites is possible, but that's another story). His answer was very pragmatic too: He took the epistemic point of view of Bohr (he also mentioned Heisenberg, but I'm not sure whether Bohr's and Heisenberg's views on this subject are really the same), i.e., that the quantum formalism is just a way to describe probabilities and that the collapse is indeed nothing else than updating the description due to reading off a measurement result. So at least Zeilinger, who did all these mind-boggling experiments his whole life, has a very down-to-earth no-nonsense view on this issue. I was very satisfied ;-)).
 
  • Like
Likes Mentz114
  • #152
vanhees71 said:
Well, this point of view is also a bit dangerous, because what's meant by an ensemble is that you can prepare each single member of the ensemble in a well-defined way, which finally defines the idea of "state".

In the formalism, a state is just a positive semidefinite self-adjoint trace-class operator with trace 1, but that's an empty phrase from the physics point of view, because physics is about real things in the lab, and thus it must be possible to define a state in an operational way for a single object. In this sense the question of the collapse is of some importance, i.e., how can you make sure that you prepare a real-world system in a state which is described by the abstract statistical operator?

I take a pragmatic view on this: A state is defined by a real-world experimental setup. E.g., at a particle accelerator you prepare particles in a state with a quite well-defined momentum. Accelerator physicists construct their devices without much use of quantum theory as far as I know, but they use the classical description of the motion of charged particles in the classical electromagnetic fields designed to achieve a particle beam of high quality (i.e., high luminosity with a pretty well defined momentum).

The preparations usually discussed in textbooks, in the sense of idealized von Neumann filter measurements, can also be understood in a very pragmatic way. Take the Stern-Gerlach experiment as an example. This can be fully treated quantum mechanically (although it's usually not done in the usual textbooks; for a very simple introduction, you can have a look at my QM 2 manuscript (in German) [1]). Then you have a rather well-defined spin-position entangled state with practically separated partial beams of definite spin (determined spin-z component). Then you simply block all unwanted partial beams by putting some absorber material in the way at the corresponding positions. What's left is then by construction a beam of well-defined ##\sigma_z## eigenstates.

Last but not least, you have to check such claims empirically, i.e., you have to make a sufficient set of measurements to make sure that you have really prepared the state with the accuracy you want.

Now comes the dilemma: We all tend to think in the naive collapse way when considering such filter measurements, assuming that the naive pragmatic way of filtering away the unwanted beams really does prepare the remaining beam in the way we think. This means that we assume that with this preparation procedure each single system (as a part of the ensemble used to check the probabilistic prediction of QT) is in this very state (it can of course also be a mixture). On the other hand, if we believe that relativistic quantum field theory provides the correct description, there's no action at a distance as implicitly assumed in the collapse hypothesis but only local interactions of the particles with all the elements of the preparation apparatus, including the "beam dumps" filtering away the unwanted beams. So you can take the collapse as a short-hand description of the preparation procedure, but not literally as something happening to the real-world entities (or ensembles of so prepared entities) without getting into a fundamental contradiction with the very foundations of local relativistic QT.

It's also interesting to see what experts in this field think. Yesterday we had Anton Zeilinger in our Physics Colloquium, and he gave just a great talk about all his Bell experiments (including one of the recent loophole-free measurements). In the discussion somebody asked the question about the collapse (so I could ask another question, about whether the communication loophole is really closed by using "random number generators" to switch the distant measurements at A's and B's place in a way that no FTL information transfer between the two sites is possible, but that's another story). His answer was very pragmatic too: He took the epistemic point of view of Bohr (he also mentioned Heisenberg, but I'm not sure whether Bohr's and Heisenberg's views on this subject are really the same), i.e., that the quantum formalism is just a way to describe probabilities and that the collapse is indeed nothing else than updating the description due to reading off a measurement result. So at least Zeilinger, who did all these mind-boggling experiments his whole life, has a very down-to-earth no-nonsense view on this issue. I was very satisfied ;-)).

As usual, you are wrong about the foundations of local relativistic QT. But let's not discuss that further. What I wish to stress here is that I do like the idea that the collapse is an updating of the description. I have always objected to your use of relativity to say that it must be an updating and nothing else. So your quote of Zeilinger does not support your views, since he did not argue his point using relativity.
 
  • #153
I'd be very interested in a clear mathematical statement of what's wrong with my view of local relativistic QFT. It's simply the standard textbook approach, using the microcausality condition for local observables (i.e., the commutation of local operators, such as densities, at space-like separation). You always claim that this standard treatment is wrong, but I've not yet seen a convincing (mathematical) argument against it. Note that the microcausality condition for the energy density operator is even necessary to have a Poincaré-covariant S-matrix!
 
  • #154
vanhees71 said:
I'd be very interested in a clear mathematical statement of what's wrong with my view of local relativistic QFT. It's simply the standard textbook approach, using the microcausality condition for local observables (i.e., the commutation of local operators, such as densities, at space-like separation). You always claim that this standard treatment is wrong, but I've not yet seen a convincing (mathematical) argument against it. Note that the microcausality condition for the energy density operator is even necessary to have a Poincaré-covariant S-matrix!

The standard treatment is right. However, microcausality does not mean what you think it means. You think that microcausality is classical relativistic causality. But it is not.
 
  • Like
Likes zonde
  • #155
What else should it be? You only repeat the same words without giving the corresponding math to justify this claim.
 
  • #156
vanhees71 said:
What else should it be? You only repeat the same words without giving the corresponding math to justify this claim.

Bell's theorem excludes classical relativistic causality.
 
  • #157
I give up. It really goes in circles :-(.
 
  • #158
vanhees71 said:
A state is defined by a real-world experimental setup. E.g., at a particle accelerator you prepare particles in a state with a quite well-defined momentum.
This means that the state is a property of the accelerator (or the beam rotating in the magnetic loop), while particles can have (loosely, classically speaking) any momentum, just distributed according to a Gaussian (or whatever precisely is prepared). In a minimal statistical interpretation (Ballentine or Peres taken literally), you can assert nothing at all [except the possible values of any set of commuting variables] about the single system, unless the state predicts some property (such as spin or polarization) exactly.
vanhees71 said:
we assume that with this preparation procedure each single system (as a part of the ensemble used to check the probabilistic prediction of QT) is in this very state (it can of course also be a mixture).
This is neither consistent (in a minimal statistical interpretation) nor necessary, since it is untestable. One can test only the behavior of a large number of these systems, and so one only needs to know (or can verify) that the source (preparation) indeed prepares the individual systems in a way that the statistics is satisfied. Thus you need not (and hence should not) assume it.
 
Last edited:
  • #159
vanhees71 said:
I give up. It really goes in circles :-(.

Since this is important, let me restate what the correct meaning of microcausality is. Microcausality is a sufficient condition to prevent superluminal signalling.

If spacelike separated observables did not commute, then measuring one could change the outcome probabilities at a distant location, enabling superluminal signalling. So spacelike separated observables must commute. In the Heisenberg picture, the observables evolve with time. Then the cluster decomposition is a condition that ensures that even under time evolution, spacelike separated operators continue to commute.

The important point is that "no superluminal signalling" is not the same as "classical relativistic causality".
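
To make the no-signalling statement concrete, here is a small NumPy sketch (a toy illustration: the two local observables act on different tensor factors and hence commute). Whether or not a non-selective measurement is performed on A's side, B's outcome statistics are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

I2 = np.eye(2)
# sigma_z measurement on A's qubit, acting on the pair as P (x) I
PA = [np.kron(np.diag([1.0, 0.0]), I2), np.kron(np.diag([0.0, 1.0]), I2)]

rho = random_density_matrix(4)                      # arbitrary (possibly entangled) 2-qubit state
rho_after_A = sum(P @ rho @ P for P in PA)          # non-selective measurement on A's side

Q = np.kron(I2, np.diag([1.0, 0.0]))                # an outcome projector on B's qubit
p_before = np.trace(rho @ Q).real
p_after = np.trace(rho_after_A @ Q).real
print(np.isclose(p_before, p_after))                # True: no superluminal signalling
```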
 
  • #160
atyy said:
The standard treatment is right. However, microcausality does not mean what you think it means.
atyy said:
Let me restate what the correct meaning of microcausality is. Microcausality is a sufficient condition to prevent superluminal signalling.
[Mentor's note: An unnecessary digression has been removed from this post]

The meaning of microcausality is defined in quantum field theory. ''No superluminal signalling'' is not the real meaning of microcausality but only a minor consequence among the many far more important consequences microcausality has, such as cluster decomposition, well-defined S-matrices, etc.
 
Last edited by a moderator:
  • #161
A. Neumaier said:
This means that the state is a property of the accelerator (or the beam rotating in the magnetic loop), while particles can have (loosely, classically speaking) any momentum, just distributed according to a Gaussian (or whatever precisely is prepared). In a minimal statistical interpretation (Ballentine or Peres taken literally), you can assert nothing at all [except the possible values of any set of commuting variables] about the single system, unless the state predicts some property (such as spin or polarization) exactly.
Agreed, but as I tried to say in the quoted posting, for the very definition of the ensemble you need a clear association between what the state describes (probabilistic, and only probabilistic, information about the system) and the single member of the ensemble. That is, the preparation procedure (or an equivalence class of preparation procedures) must define a single system to be prepared in that state. Of course, since the knowledge of the state has only probabilistic content, you can test the correctness of this association only by measuring at least a complete set of compatible observables on a repeatedly prepared system, i.e., an ensemble.

A. Neumaier said:
This is neither consistent (in a minimal statistical interpretation) nor necessary, since it is untestable. One can test only the behavior of a large number of these systems, and so one only needs to know (or can verify) that the source (preparation) indeed prepares the individual systems in a way that the statistics is satisfied. Thus you should not assume it.
Well, then you cannot even test QT in principle. You must assume that it is possible to prepare an ensemble, i.e., each single system within the ensemble, reproducibly in the state you claim to verify or falsify by making repeated measurements on the system.
 
  • #162
vanhees71 said:
You must assume that it is possible to prepare an ensemble, i.e., each single system within the ensemble
The first half is essential, always assumed, easily verifiable, and amply verified in practice.
The second half is contrary to the idea of an ensemble, and cannot be tested in the quantum domain.
The connecting ''i.e.'' is inconsistent, since it suggests that the second half is just an equivalent interpretation of the first.

In the case of an accelerator, you can at any time take 10000 systems from the circulating stream and verify that they satisfy the statistics. There is no need to assume that each individual is ''the same'' or ''identically prepared'' in any sense. It is just one of the systems prepared in the stream. The stationarity of the stream is the physical equivalent of the ''identically distributed'' assumption in mathematical statistics.

On the other hand, if you take out just one system and measure a single observable, you get just a random result, about which the preparation predicts nothing (except if the statistics is so sharp that it predicts a single value within the experimental uncertainty). This is the reason one cannot say anything about the single system. This is characteristic of any stochastic (classical or quantum) system. So what should it mean that each single system is prepared in the same way, beyond the fact that it is part of the prepared ensemble? It cannot mean anything, so talking as if there were additional meaning is irritating.

Thus if you subscribe to the statistical interpretation you should adapt your language accordingly. Nothing is lost when making the language more precise to better reflect one's interpretation. But a lot of clarity is gained, and people are less likely to misunderstand your position.
 
Last edited:
  • #163
rubi said:
Lemma (conditional probability) For commuting projections ##P_A## and ##P_B##, the conditional probability ##P(B|A)## in the state ##\rho## is given by
##P(B|A) = \mathrm{Tr}\left(\frac{P_A \rho P_A}{\mathrm{Tr}(P_A \rho P_A)} P_B\right) = \frac{\mathrm{Tr}\left(P_B P_A \rho P_A P_B\right)}{\mathrm{Tr}(P_A \rho P_A)}##.
Proof. The conditional probability is defined by ##P(B|A) = \frac{P(A\wedge B)}{P(A)}##. The projection operator for ##A\wedge B## is given by ##P_{A\wedge B}=P_A P_B##. The Born rule tells us that ##P(A)=\mathrm{Tr}(\rho P_A)## and
##P(A\wedge B) = \mathrm{Tr}(\rho P_{A\wedge B}) = \mathrm{Tr}(\rho P_A P_B) = \mathrm{Tr}(\rho P_A^2 P_B) = \mathrm{Tr} (\rho P_A P_B P_A) = \mathrm{Tr}(P_A \rho P_A P_B) = \mathrm{Tr}(P_B P_A \rho P_A P_B)##. Now use linearity of the trace to get the formula for the conditional probability.

Now we want to apply this formula to sequential measurements. Let's say we want the probability to find observable ##Y## in the set ##O_{Y}## at time ##t_2## after having found observable ##X## in the set ##O_{X}## at time ##t_1 < t_2##. This corresponds to measuring the Heisenberg observables ##X(t_1)## and ##Y(t_2)## in the same sets. Let ##\pi_X## and ##\pi_Y## be the projection valued measures of ##X## and ##Y##. The corresponding projection valued measures of ##X(t_1)## and ##Y(t_2)## are given by ##U(t_1)^\dagger \pi_X U(t_1)## and ##U(t_2)^\dagger \pi_Y U(t_2)##. Thus we are interested in the projections ##P_A = U(t_1)^\dagger \pi_X(O_X) U(t_1)## and ##P_B = U(t_2)^\dagger \pi_Y(O_Y) U(t_2)##. We assume that these operators commute, which is true up to arbitrarily small corrections, for instance for filtering-type measurements or after a sufficient amount of decoherence has occurred. Thus we can apply the lemma and get:
##P(Y(t_2)\in O_Y | X(t_1) \in O_X) = \frac{\mathrm{Tr}(P_B P_A \rho P_A P_B)}{\mathrm{Tr}(P_A \rho P_A)} = \frac{\mathrm{Tr}(\pi_Y(O_Y) U(t_2-t_1) \pi_X(O_X) U(t_1) \rho U(t_1)^\dagger \pi_X(O_X) U(t_2-t_1)^\dagger \pi_Y(O_Y))}{\mathrm{Tr}(\pi_X(O_X) U(t_1) \rho U(t_1)^\dagger \pi_X(O_X))}##
This is the right formula and we only assumed the Born rule and decoherence/filtering measurements.

I made comments in posts #148 and #150 on a later post of yours. One more comment on this earlier post. In deriving the generalized Born rule for commuting sequential measurements, doesn't one have to assume that the order of the measurements does not alter the joint probabilities? That seems like a generalization of Dirac's requirement (for sharp measurements with discrete spectra) that immediate repetition of a measurement gives the same result, from which Dirac derives the projection postulate. If this is right, then it remains true that one needs an additional postulate beyond the Born rule. One need not state the projection postulate explicitly, but, like Dirac, something additional has to be introduced, e.g., that immediate repetition yields the same outcome.
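
Independently of that question, the conditional-probability lemma quoted above is easy to verify numerically; the sketch below uses two commuting projectors acting on different qubits of a random two-qubit state (an arbitrary toy choice):

```python
import numpy as np

rng = np.random.default_rng(1)

I2, P0 = np.eye(2), np.diag([1.0, 0.0])
P_A = np.kron(P0, I2)        # "first qubit is |0>"
P_B = np.kron(I2, P0)        # "second qubit is |0>"; commutes with P_A

a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = a @ a.conj().T
rho /= np.trace(rho)                                          # random density matrix

p_A = np.trace(rho @ P_A).real                                # Born rule for A
p_AB = np.trace(rho @ P_A @ P_B).real                         # Born rule for the joint projector P_A P_B
lhs = p_AB / p_A                                              # definition of P(B|A)
rhs = np.trace(P_B @ P_A @ rho @ P_A @ P_B).real / np.trace(P_A @ rho @ P_A).real
print(np.isclose(lhs, rhs))                                   # True: the lemma's formula
```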
 
  • #164
atyy said:
But can one really do without collapse? The reason I am doubtful is that at each measurement in the Schroedinger picture, the rule that is used is the Born rule, not the generalized Born rule. So by using successive applications of the Born rule, one does not get the joint probability, which the generalized Born rule gives. So rather I would say that although the generalized Born rule can be derived without collapse as a postulate, the generalized Born rule implies collapse.

I don't understand your claim here. Let's establish a bottom line, here. Are you saying that there is some sequence of measurements that can be performed such that the predicted statistics are different, depending on whether you compute them assuming collapse after each measurement, or not?
 
  • #165
stevendaryl said:
I don't understand your claim here. Let's establish a bottom line, here. Are you saying that there is some sequence of measurements that can be performed such that the predicted statistics are different, depending on whether you compute them assuming collapse after each measurement, or not?

No (and yes). Rather, I am saying that without collapse, in the Schroedinger picture, one cannot even compute the joint probability. If at t1 we measure A, the Born rule gives P(A), and at t2 we measure B, the Born rule gives P(B). With collapse, or with the generalized Born rule, one is able to compute P(A,B).

The Yes part of the answer is because there isn't necessarily a unique collapse. Thus the projection postulate is not the most general form of collapse. Different collapses do lead to different statistics.
 
  • #166
Collapse is, for me, synonymous with erasure of information. Rovelli writes that since we can always get new information about a finite system, old information has to be replaced by new information. That is what is done with collapse.
Could you tell me what the quantum no-deleting theorem implies? Is it an argument against collapse?
 
  • #167
atyy said:
No (and yes). Rather, I am saying that without collapse, in the Schroedinger picture, one cannot even compute the joint probability. If at t1 we measure A, the Born rule gives P(A), and at t2 we measure B, the Born rule gives P(B). With collapse, or with the generalized Born rule, one is able to compute P(A,B).

We may have gone through this before, but if so, I don't remember what conclusions we reached.

But, here are the two ways of computing the results of two sequential measurements, one that uses collapse, and one that does not. Let's assume that the measurements are performed by a machine that is simple enough (or we are smart enough) that it can be analyzed using quantum mechanics.
  1. The system to be studied is set up in state ##|\psi\rangle##
  2. The machine measures A (assumed to be a 0/1-valued measurement, for simplicity), and finds A is 1.
  3. Later, the machine measures B (also assumed to 0/1-valued), and finds B is 1.
The question is: what's the probability of these results?

Collapse way:
We compute the probability of this as follows:
  • The system is initially in state ##\psi##
  • We evolve ##\psi## forward in time to the time of step 2 above. Now the system is in state ##\psi'##
  • Write ##|\psi'\rangle = \alpha |\psi_1\rangle + \beta |\psi_0\rangle##, where ##\psi_1## and ##\psi_0## are eigenstates of A with eigenvalues 1 and 0, respectively.
  • Then the probability of getting 1 is ##|\alpha|^2##.
  • After measuring A=1, the state collapses into state ##\psi_1##.
  • Now, we evolve ##\psi_1## in time to the time of step 3 above. Now the system is in state ##\psi_1'##.
  • Write ##|\psi_1'\rangle = \gamma |\psi_{11}\rangle + \delta |\psi_{01}\rangle##, where ##\psi_{11}## and ##\psi_{01}## are eigenstates of B with eigenvalues 1 and 0, respectively.
  • The probability of measuring B=1 at this point is ##|\gamma|^2##
  • So the probability of getting two 1s is ##|\alpha|^2 |\gamma|^2##
Noncollapse way:
Let's analyze the composite system ##|\Psi\rangle = |\psi\rangle \otimes |\phi\rangle##, where ##|\psi\rangle## describes the system, and ##|\phi\rangle## describes the device. For simplicity, let's assume that the composite system has no interaction with anything else up until step 3.
  • The composite system is initially in state ##|\Psi\rangle##
  • Evolve the composite system to the time of step 3 (We don't need to stop at step 2! That's just an ordinary quantum-mechanical interaction.) Now the system is in state ##|\Psi'\rangle##
  • We write ##|\Psi'\rangle = a |\Psi_{00}\rangle + b |\Psi_{01}\rangle + c |\Psi_{10}\rangle + d |\Psi_{11}\rangle## where:
    • ##|\Psi_{00}\rangle## is a state in which the measuring device has a record of getting 0 for the first measurement and 0 for the second measurement.
    • ##|\Psi_{01}\rangle## is a state in which the measuring device has a record of getting 0 for the first measurement and 1 for the second measurement.
    • etc.
  • Then the probability of getting two 1s in a row is ##|d|^2##
Okay, this is very much an oversimplification, because I ignored the interaction with the environment, and because a macroscopic device doesn't have just a single state corresponding to "measuring 1" or "measuring 0", but has a whole set of states that are consistent with those measurements. But anyway, you get the idea.

My claim is that ##|d|^2 \approx |\alpha|^2 |\gamma|^2##, and that the difference between them (assuming that we could actually compute ##d##) is completely negligible.

From this point of view, the use of "collapse" is just a calculational shortcut that avoids analyzing macroscopic devices quantum-mechanically.
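
The two ways of computing can also be compared in a toy model. The sketch below is an illustration only (the random unitaries and the CNOT-style pointer couplings are idealized stand-ins for the real evolution and apparatus): it records the A and B outcomes into two pointer qubits and compares the weight ##|d|^2## of the (1,1) record with the collapse result ##|\alpha|^2|\gamma|^2##; in this idealized model the two agree exactly.

```python
import numpy as np

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
P0, P1 = np.diag([1.0 + 0j, 0]), np.diag([0, 1.0 + 0j])

def record(n, c, t):
    """CNOT on n qubits: flip pointer t iff qubit c is |1> (records c into t)."""
    ops0 = [I2] * n; ops0[c] = P0
    ops1 = [I2] * n; ops1[c] = P1; ops1[t] = X
    return kron(*ops0) + kron(*ops1)

rng = np.random.default_rng(2)
def random_unitary():
    q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return q

U1, U2 = random_unitary(), random_unitary()        # evolution before / between the two measurements
psi0 = np.array([1, 0], dtype=complex)             # system prepared in |0>

# Collapse way
alpha = (U1 @ psi0)[1]                             # amplitude for A = 1
gamma = (U2 @ np.array([0, 1], dtype=complex))[1]  # amplitude for B = 1 after collapsing to |1>
p_collapse = abs(alpha) ** 2 * abs(gamma) ** 2

# Noncollapse way: system qubit + two pointer qubits, purely unitary
Psi = np.kron(np.kron(psi0, [1, 0]), [1, 0]).astype(complex)
Psi = kron(U1, I2, I2) @ Psi
Psi = record(3, 0, 1) @ Psi                        # pointer 1 records A
Psi = kron(U2, I2, I2) @ Psi
Psi = record(3, 0, 2) @ Psi                        # pointer 2 records B
branch_11 = kron(I2, P1, P1) @ Psi                 # branch where both pointers read 1
p_unitary = np.vdot(branch_11, branch_11).real     # this is |d|^2

print(np.allclose(p_collapse, p_unitary))          # True in this idealized toy model
```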
 
Last edited:
  • #168
stevendaryl said:
We may have gone through this before, but if so, I don't remember what conclusions we reached.

But, here are the two ways of computing the results of two sequential measurements, one that uses collapse, and one that does not. Let's assume that the measurements are performed by a machine that is simple enough (or we are smart enough) that it can be analyzed using quantum mechanics.
  1. The system to be studied is set up in state ##|\psi\rangle##
  2. The machine measures A (assumed to be a 0/1-valued measurement, for simplicity), and finds A is 1.
  3. Later, the machine measures B (also assumed to 0/1-valued), and finds B is 1.
The question is: what's the probability of these results?

Collapse way:
We compute the probability of this as follows:
  • The system is initially in state ##\psi##
  • We evolve ##\psi## forward in time to the time of step 2 above. Now the system is in state ##\psi'##
  • Write ##|\psi'\rangle = \alpha |\psi_1\rangle + \beta |\psi_0\rangle##, where ##\psi_1## and ##\psi_0## are eigenstates of A with eigenvalues 1 and 0, respectively.
  • Then the probability of getting 1 is ##|\alpha|^2##.
  • After measuring A=1, the state collapses into state ##\psi_1##.
  • Now, we evolve ##\psi_1## in time to the time of step 3 above. Now the system is in state ##\psi_1'##.
  • Write ##|\psi_1'\rangle = \gamma |\psi_{11}\rangle + \delta |\psi_{01}\rangle##, where ##\psi_{11}## and ##\psi_{01}## are eigenstates of B with eigenvalues 1 and 0, respectively.
  • The probability of measuring B=1 at this point is ##|\gamma|^2##
  • So the probability of getting two 1s is ##|\alpha|^2 |\gamma|^2##
Noncollapse way:
Let's analyze the composite system ##|\Psi\rangle = |\psi\rangle \otimes |\phi\rangle##, where ##|\psi\rangle## describes the system, and ##|\phi\rangle## describes the device. For simplicity, let's assume that the composite system has no interaction with anything else up until step 3.
  • The composite system is initially in state ##|\Psi\rangle##
  • Evolve the composite system to the time of step 3 (We don't need to stop at step 2! That's just an ordinary quantum-mechanical interaction.) Now the system is in state ##|\Psi'\rangle##
  • We write ##|\Psi'\rangle = a |\Psi_{00}\rangle + b |\Psi_{01}\rangle + c |\Psi_{10}\rangle + d |\Psi_{11}\rangle## where:
    • ##|\Psi_{00}\rangle## is a state in which the measuring device has a record of getting 0 for the first measurement and 0 for the second measurement.
    • ##|\Psi_{01}\rangle## is a state in which the measuring device has a record of getting 0 for the first measurement and 1 for the second measurement.
    • etc.
  • Then the probability of getting two 1s in a row is ##|d|^2##
Okay, this is very much an oversimplification, because I ignored the interaction with the environment, and because a macroscopic device doesn't have just a single state corresponding to "measuring 1" or "measuring 0", but has a whole set of states that are consistent with those measurements. But anyway, you get the idea.

My claim is that ##|d|^2 \approx |\alpha|^2 |\gamma|^2##, and that the difference between them (assuming that we could actually compute ##d##) is completely negligible.

From this point of view, the use of "collapse" is just a calculational shortcut that avoids analyzing macroscopic devices quantum-mechanically.

I think we (kith, you, rubi, and I) have agreed many times that this uses the deferred measurement principle, and is a way of calculating the same probabilities without using collapse. However, what we have is a simultaneous measurement at a single late time, not two measurements in sequence. This is the same as avoiding nonlocality in quantum mechanics by saying that there is no reality to the distant observer, since the distant observer does not need to be real until she meets Bob. So yes, collapse can be avoided, just like nonlocality. However, one has to place some non-standard restriction on what one considers real (sequential measurements or distant observers).
 
  • #169
atyy said:
I have no problem with deriving the generalized Born rule without collapse for commuting observables. Ballentine does that, and I believe his argument is fine. The part of his argument I did not buy was his attempt to extend the argument to non-commuting observables. The important part of your argument is to circumvent the need for sequential measurement of non-commuting operators, which is fine, but it should be stated as a non-standard assumption.
I make no assumptions but the usual axioms of quantum theory minus collapse. I just shift the description of the measurement apparatus to the quantum side and that is clearly the right thing to do since an apparatus is also made of matter and is thus governed by the laws of quantum mechanics. I admit that my description of the apparatus is unrealistic, since I just treated it as a black box, rather than a multi-particle system, but that shouldn't be a conceptual problem.

atyy said:
Also, it does not support the idea that there is a wave function of the universe that evolves unitarily without collapse, because one still needs something in addition to the wave function, e.g., the classical apparatus.
The apparatus is a quantum object. It just behaves very classically, which I enforced by modeling it using commuting observables. I don't really add something, I just use a more complex description of the physical system.

atyy said:
But can one really do without collapse? The reason I am doubtful is that at each measurement in the Schroedinger picture, the rule that is used is the Born rule, not the generalized Born rule. So by using successive applications of the Born rule, one does not get the joint probability, which the generalized Born rule gives. So rather I would say that although the generalized Born rule can be derived without collapse as a postulate, the generalized Born rule implies collapse.
I don't understand this comment. In my derivation, I have used only the Born rule ##P(X) = \mathrm{Tr}(\rho P_X)## and I also don't apply it successively. I also used that for commuting ##P_A##, ##P_B## it is true that ##P_{A\wedge B}=P_A P_B##, but this is not an axiom, but it can be derived. ##(P_A\psi = \psi) \wedge (P_B\psi=\psi) \Leftrightarrow P_A P_B \psi = \psi##.

If ##P_A##, ##P_B## don't commute, then ##A\wedge B## is meaningless in a quantum world. It is neither true nor false. Even though it seems like a perfectly meaningful logical statement, it is really just as meaningful as "at night it's colder than outside". ##A\wedge B## just isn't in the list of things that can or cannot occur. Since all pointer readings are definite facts about the world, they must be modeled by commuting projectors. On the other hand, we can never get information about a quantum system other than by looking at a pointer. So in principle, it is enough to know the generalized Born rule only for commuting observables.

atyy said:
I made comments in posts #148 and #150 on a later post of yours. One more comment on this earlier post. In deriving the generalized Born rule for commuting sequential measurements, doesn't one have to assume that the order of the measurements does not alter the joint probabilities? That seems like a generalization of Dirac's requirement (for sharp measurements with discrete spectra) that immediate repetition of a measurement gives the same result, from which Dirac derives the projection postulate. If this is right, then it remains true that one needs an additional postulate beyond the Born rule. One need not state the projection postulate explicitly, but, like Dirac, something additional has to be introduced, e.g., that immediate repetition yields the same outcome.
I don't see where I need that assumption. Can you point to a specific part of my calculation?
 
  • #170
A. Neumaier said:
The first half is essential, always assumed, easily verifiable, and amply verified in practice.
The second half is contrary to the idea of an ensemble, and cannot be tested in the quantum domain.
The connecting ''i.e.'' is inconsistent, since it suggests that the second half is just an equivalent interpretation of the first.
I don't understand this claim. An ensemble, by definition, is the repeated setup of single systems, prepared independently of each other, which you like to investigate. Each measurement to verify the probabilistic properties of the quantum state you associate with this experimental setup is performed on the individual members of the ensemble.

E.g., if you measure a cross section, e.g., for Higgs production in pp collisions at the LHC, you prepare very many pp initial states in the accelerator (in the form of particle bunches with a pretty well-determined beam energy/momentum) and let them interact. You can consider at least the pp pairs in any bunch as independent, but experience shows you can understand the measured cross sections by forgetting about all details and just using the usual momentum-eigenstate cross sections (carefully defined in a limiting process starting from wave packets, whose width in momentum space you make arbitrarily small at the end of the calculation, as explained, e.g., nicely in Peskin/Schroeder). Of course, the cross section is a probabilistic quantity, giving the probability for producing Higgs bosons, and thus you can only measure it by repeating the reaction many times ("ensemble"), but you assume that each individual setup can be prepared in the assumed initial state you associate with the ensemble. Otherwise the ensemble interpretation doesn't make sense.
 
  • #171
atyy said:
I made comments in posts #148 and #150 on a later post of yours. One more comment on this earlier post. In deriving the generalized Born rule for commuting sequential measurements, doesn't one have to assume that the order of the measurements does not alter the joint probabilities? That seems like a generalization of Dirac's requirement (for sharp measurements with discrete spectra) that immediate repetition of a measurement gives the same result, from which Dirac derives the projection postulate. If this is right, then it remains true that one needs an additional postulate beyond the Born rule. One need not state the projection postulate explicitly, but, like Dirac, something additional has to be introduced, e.g., that immediate repetition yields the same outcome.
I also do not understand your quibbles here. According to QT (in the minimal interpretation) if you measure compatible observables, represented by commuting self-adjoint operators, there's no problem with the order of (filter!) measurements.

It's important to keep in mind that you discuss here a very specific (in most practical cases overidealized) class of measurements, i.e., von Neumann filter measurements. This means that if you measure observable ##A## first and then ##B## (no matter whether those are compatible with each other or not), you assume that you use the measurement of ##A## for state preparation by filtering out the part of the ensemble which has a certain outcome ##a##, where ##a## is an eigenvalue of the operator ##\hat{A}## representing the observable ##A##. Since you don't know more than that the systems within the new ensemble are now prepared in some eigenstate with eigenvalue ##a##, the choice of the new state should be the normalized projector onto the corresponding eigenspace,
$$\hat{\rho}_A(a)=\frac{1}{d_a}\sum_{\beta} |a,\beta \rangle \langle a,\beta|,$$
where ##d_a## is the dimension of that eigenspace. Now you measure ##B##. Then according to the new state you know that the probability to measure ##b## is
$$P_B(b|\rho_A)=\sum_{\gamma} \langle b,\gamma|\hat{\rho}_A(a)|b,\gamma \rangle.$$
This is just Born's rule. Nowhere have I invoked a collapse hypothesis but just the assumption that I have performed an ideal filter measurement of ##A##. Whether these assumptions hold true, i.e., whether you really prepare the state described by ##\hat{\rho}_A(a)##, must of course be checked on a sufficiently large ensemble. It can be verified completely only by measuring a complete set of compatible observables on the ensemble.
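
As a concrete toy example of this prescription (a sketch only; the basis used for ##B## is an arbitrary stand-in for the ##|b,\gamma\rangle##):

```python
import numpy as np

# Toy 3-level system: observable A has a doubly degenerate eigenvalue a,
# with eigenvectors |a,1>, |a,2> taken as the first two basis vectors.
d_a = 2
P_a = np.diag([1.0, 1.0, 0.0])                    # projector onto the a-eigenspace
rho_A = P_a / d_a                                 # state assigned to the beam behind the A-filter

# An arbitrary orthonormal basis |b,gamma> for the subsequent B-measurement
rng = np.random.default_rng(3)
B_basis, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# Born rule: P_B(b_k | rho_A) = <b_k| rho_A |b_k>
probs = [np.vdot(B_basis[:, k], rho_A @ B_basis[:, k]).real for k in range(3)]
print(np.round(probs, 3), "sum =", round(sum(probs), 3))   # non-negative and summing to 1
```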
 
  • #172
vanhees71 said:
you assume that each individual setup can be prepared in the assumed initial state you associate with the ensemble.
Where do you actually use this uncheckable assumption? Never. You only use that you take the particles from one or more streams that you prepared by the same, particular setting of the accelerator controls. This is the preparation, to which a particular state is associated.

This state (a property of the preparation procedure, as the statistical interpretation asserts) is a reasonably deterministic function of the accelerator controls. Which state is prepared for a particular setting of the controls can be predicted by classical reasoning about the accelerator design, and can be calibrated by measuring the momentum distribution of sufficiently many particles from the stream. One can repeat it often enough to infer the quality of the preparation. Ultimately, the state says that the distribution of the particle momenta has a certain, e.g., Gaussian form. Nothing at all is assumed about the individual particles.

From this you can use quantum mechanical calculations to compute the predicted cross sections in collision experiments that use this beam. You can perform the experiment with sufficiently many particles from the stream, measure the resulting particle tracks, infer from them the particles obtained in the collision, and calculate from these the cross sections. You can then compare the results with the predictions and find agreement within the accuracy of your preparation.

Thus the design, performance and analysis of the whole experiment is described solely in terms of the preparation procedure, its state, and the experimental results.

At no point do you need any information about any property of an individual particle, or any information to the effect that they are prepared in the same state (a meaningless statement in the minimal interpretation).

By talking about the situation as if you'd need to know something about an individual system you leave the statistical setting, which never makes any assertion about an individual.
 
  • #173
Yes, that's precisely what I say. I prepare individual systems and make measurements on individual systems, repeating this procedure in the same way very many times, which forms the ensemble. Of course, I cannot make predictions for any individual outcome, as long as I have not a deterministic but "only" a probabilistic theory, but still for the ensemble idea to make sense you must assume that the preparation procedure, applied to individual systems, leads to the specific states (having "only" probabilistic meaning according to Born's rule) describing the ensemble associated with the so-prepared quantum state. Of course, you need the information about the preparation procedure applied to each individual particle (or bunch in the accelerator).

Another example is the Bell tests, performed, e.g., with polarization-entangled biphotons. Then you must make sure that you deal with independently prepared biphotons in a very specific (here even pure!) polarization state. The measurement is then performed at two "far-distant" places to rule out, among others, the communication loophole, i.e., you randomly decide what you measure in a way that there cannot be communication between these choices. Strictly speaking, this loophole is not fully closed. As I learned from Zeilinger's talk, they now plan to use entanglement swapping, using photons from very far distant pulsars, which should not in any way have been able to communicate with each other (assuming the space-time structure of GR to be correct).

That's why for me a quantum state is defined as an equivalence class of preparation procedures leading to ensembles that are described (with sufficient accuracy) by a statistical operator of the formalism (which can be a pure or a mixed state of course; in almost all cases the latter).

Of course, it's true that you have to calibrate your preparation, i.e., you must know the precise luminosity and momentum distribution of your particles before you start to measure any cross sections.
 
  • #174
vanhees71 said:
still for the ensemble idea to make sense you must assume that the preparation procedure, applied to individual systems, leads to the specific states
No. Where do you use this assumption? Nowhere!
I gave a complete account without ever mentioning anything about the particles except that they are generated by the accelerator. The state is in no way attached to the individual system - the latter are completely anonymous entities in the stream generated by the source.

The only use made of the state is as a (cumulative, average) property of the source - so why introduce the unobservable and questionable additional concept of something associated to the single system? It is as unnecessary as the Bohmian hidden variables and only invites misunderstandings (such as questions about collapse)!
 
Last edited:
  • #175
You never prepare the ensemble at once, or at least it would be very difficult to prepare it that way without unwanted correlations. E.g., it's a (measurable!) difference whether you prepare ##N## times a polarization entangled two-photon Fock state or some state with 2N photons. Thus you have to assume that a sufficiently correct preparation procedure of the single system really leads to the ensembles you think you describe with the corresponding quantum state. Of course, never ever has a contradiction between this assumption and real experiments been found, and that's why quantum theory (in the minimal interpretation) works so successfully, but it should be very clear that the state has a meaning for an individual system, though only via the equivalence class of preparation procedures: ensembles are obtained by independently preparing many individual systems in the so-defined state.
 
  • #176
vanhees71 said:
it should be very clear that the state has a meaning for an individual system
This is not very clear - on the contrary, it directly contradicts the minimal statistical interpretation. It is neither minimal nor statistical, but the source of all troubles, including atyy's claim that the statistical interpretation needs a collapse. Every interpretation that attaches a state to the individual system needs the collapse, whereas if an individual system has no associated state there isn't even a way to say what the collapse should mean.

vanhees71 said:
it's a (measurable!) difference whether you prepare N times a polarization entangled two-photon Fock state or some state with 2N photons.
Indeed, the two sources create two essentially different systems. In the first case, each individual system contains exactly 2 photons, while in the second case, each individual system contains exactly 2N photons.

In both cases, one can verify what the source produces by making experiments on a large number of these systems, and in this way distinguish the two.

And in both cases, the individual system has no state, it has only the general characteristics of a quantum system that are independent of its state (in this case, the fixed number of photons they contain).
 
  • #177
I would like to say something on the topic of this thread.
I think that collapse can't be simply removed for the following reasons:
1. QM with collapse is standard. So if we say that QM predictions are experimentally verified, we mean QM with collapse. Present attempts at removing collapse make additional assumptions (about how to model measurement). Adding assumptions is justified only if we arrive at new predictions, so that we can verify these new assumptions. Otherwise we have to stick to the less specific model.
2. Attempts at removing collapse assume that macroscopic (classical) objects (the experimental equipment) are very complex quantum systems that can be approximated as simple quantum systems. I think this assumption is false, for the following reason. In the case of simple quantum systems we take into consideration any particle that participates in interactions within the system; that way we make a clear cut between the system under consideration and the rest of the world. However, in the case of large classical systems we ignore the environment even though the system is constantly interacting with it. So basically the cut is impossible to make: at one moment a photon is part of the environment (does not yet belong to the system), at the next moment it is absorbed (belongs to the system), and yet a moment later it (another one) is heading away from the system (does not belong to the system any more). We can compare it with a living being who is breathing the air. We cannot place a clear cut between a living thing and its environment, while we can do that for a piece of rock.
 
  • #178
zonde said:
QM with collapse is standard.
No. Collapse occurs only in some of the standard interpretations. Collapse is certainly not part of shut-up-and-calculate, the part on which everyone agrees.

Only the Born rule (which doesn't say anything about the state after measurement) is standard, when applied in the appropriate context (e.g., in scattering experiments).
zonde said:
In the case of simple quantum systems we take into consideration any particle that participates in interactions within the system.
No. You simply ignore most things. The whole detector participates in the interaction with the system, otherwise it couldn't detect anything. Thus it would have to be taken into consideration, according to your claim. Simply ignoring this and replacing the interaction by collapse is obviously an approximation. In addition, there are losses due to contact with the transmission medium and its boundary. Again these consist of lots of particles interacting with the system (otherwise the system could not lose anything to them). You ignore this, too. The typical Bell-type experiment ignores all these issues and replaces them by approximate reasoning about efficiency.
 
Last edited:
  • #179
A. Neumaier said:
No. Collapse occurs only in some of the standard interpretations. Collapse is certainly not part of shut-up-and-calculate, the part on which everyone agrees.
It's not so much a question about the part on which everyone agrees but rather about the approach that is most universally used to get clear unequivocal predictions for real experimental setups. But it would be better to hear a confirmation from some experimentalist.

A. Neumaier said:
In addition, there are losses due to contact with the transmission medium and its boundary. Again these consist of lots of particles interacting with the system (otherwise the system could not lose anything to them). You ignore this, too. The typical Bell-type experiment ignores all these issues and replaces them by approximate reasoning about efficiency.
Yes, here you are right.
 
  • #180
zonde said:
that is most universally used to get clear unequivocal predictions
The predictions depend only on the shut-up-and-calculate part.
 
  • #181
atyy said:
I think we (kith, you, rubi, and I) have agreed many times that this uses the deferred measurement principle, and is a way of calculating the same probabilities without using collapse. However, what we have is a simultaneous measurement at a single late time, not two measurements in sequence. This is the same as avoiding nonlocality in quantum mechanics by saying that there is no reality to the distant observer, since the distant observer does not need to be real until she meets Bob. So yes, collapse can be avoided, just like nonlocality. However, one has to place some non-standard restriction on what one considers real (sequential measurements or distant observers).

Okay, but isn't that more of a matter of qualms about the interpretation of QM? If you can get the same numbers (in theory) without collapse, doesn't that show that collapse isn't "indispensable"?
 
  • #182
A. Neumaier said:
Indeed, the two sources create two essentially different systems. In the first case, each individual system contains exactly 2 photons, while in the second case, each individual system contains exactly 2N photons.

In both cases, one can verify what the source produces by making experiments on a large number of these systems, and in this way distinguish the two.

And in both cases, the individual system has no state, it has only the general characteristics of a quantum system that are independent of its state (in this case, the fixed number of photons they contain).
Now you yourself admit that the state (an equivalence class of a preparation procedure of a single (!) system) has a meaning for the individual system. You can't have ensembles if you can't prepare individual systems. I also agree that testing whether you really have prepared some specific state can only be done on the ensemble, since probabilistic statements are meaningless for an individual system. That's the minimal interpretation: Specifying the state of an individual system has only very limited meaning concerning the observable facts about this system. The only statement you can make is that, if you have prepared the system in a state in which some observable is determined, then a measurement on the individual system yields a predetermined value, namely an eigenvalue of the self-adjoint operator representing that observable. The statistical operator describing the state then must be of the form
$$\hat{\rho}_A(a)=\sum_{j} p_j |a,j \rangle \langle a,j|, \quad \sum_j p_j=1,\quad p_j \geq 0.$$
The ##|a,j \rangle## span the eigenspace of ##\hat{A}## of eigenvalue ##a##.

Your last paragraph simply describes an unprepared system. Then you cannot even associate a state to an ensemble, or, better said, you don't even have an ensemble, because it is not specified how to define it.
 
  • #183
vanhees71 said:
You can't have ensembles if you can't prepare individual systems
One prepares individual systems, according to the statistical interpretation, but these individual systems have no state, since the state is a property of the ensemble only, not of the individual systems.
vanhees71 said:
you yourself admit that the state (an equivalence class of a preparation procedure of a single (!) system) has a meaning for the individual system.
No. Only measurable properties that are definite in the state of the ensemble have a meaning for the individual; in the present case the number of particles specifying the individual system - since this is common to all individual systems by definition of the ensemble. But the individual system has no state - it only has the definite properties common to all individual systems in the preparation.
vanhees71 said:
Your last paragraph simply describes an unprepared system.
No. It describes the individual systems in a preparation whose state has definite particle number but otherwise only statistical properties that depend on what is prepared and measured. For example, in an electron accelerator one prepares an electron beam whose individual systems are known to be electrons but whose other properties are undetermined and depend on the measurement performed on them, according to the momentum distribution determined by the state (the detailed preparation).

In fact, strictly speaking, the measurement results are not even properties of the individual system but properties of the detector in contact with the particle field determined by the preparation. One can completely avoid mentioning the individual microscopic systems. Indeed, what one measures in a collision experiment are ionization tracks and tracks of deposited energy - properties of the detection fluid or wires. Quantum mechanics predicts how the statistics of the tracks in the detector is related to the state of the source, both macroscopically determined stuff.

The particles themselves remain invisible and their properties may even be regarded as completely hypothetical. That we say we measured the track of a particle is already an interpretation of the measurement results, even a problematic one: In the most orthodox setting where only properties can be measured that correspond to commuting operators, a quantum particle should not have a track, since a track implies fairly definite position and momentum simultaneously!
 
  • Like
Likes vanhees71
  • #184
rubi said:
It shows that every quantum theory that requires collapse can be converted into one that evolves purely unitarily and makes the same predictions. Here is the recipe:
We start with a Hilbert space ##\mathcal H##, a unitary time evolution ##U(t)## and a set of (possibly non-commuting) observables ##(X_i)_{i=1}^n##. We define the Hilbert space ##\hat{\mathcal H} = \mathcal H\otimes\underbrace{\mathcal H \otimes \cdots \mathcal H}_{n \,\text{times}}##. We define the time evolution ##\hat U(t) \psi\otimes\phi_1\otimes\cdots\otimes\phi_n = (U(t)\psi)\otimes\phi_1\otimes\cdots\otimes\phi_n## and the pointer observables ##\hat X_i \psi\otimes\phi_1\otimes\cdots\otimes\phi_n = \psi\otimes\phi_1\otimes\cdots\otimes (X_i\phi_i)\otimes\cdots\otimes\phi_n##. First, we note that ##\left[\hat X_i,\hat X_j\right]=0##, so we can apply the previous result. Now, for every observable ##X_i## with ##X_i\xi_{i k} = \lambda_{i k}\xi_{i k}## (I assume discrete spectrum here, so I don't have to dive into direct integrals), we introduce the unitary von Neumann measurements ##U_i \left(\sum_k c_k\xi_{i k}\right)\otimes\phi_1\otimes\cdots\otimes\phi_n = \sum_k c_k \xi_{i k} \otimes\phi_1\otimes\cdots\otimes \xi_{i k} \otimes\cdots\otimes\phi_n##. Whenever a measurement of an observable ##X_i## is performed, we apply the corresponding unitary operator ##U_i## to the state. Thus, all time evolutions are given by unitary operators (either ##\hat U(t)## or ##U_i##) and thus the whole system evolves unitarily. Moreover, all predictions of QM with collapse, including joint and conditional probabilities, are reproduced exactly, without ever having to use the collapse postulate.

Of course, this is the least realistic model of measurement devices possible, but one can always put more effort into better models.

In this paper Pati shows that when an unknown bit becomes part of two entangled bits there is a unitary process such that the initial state of the bit is not erased but copied to a third bit.
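
Setting the no-deleting question aside, rubi's recipe quoted above can be spelled out numerically for a single qubit measured first in the ##\sigma_z## and then in the ##\sigma_x## basis (two non-commuting observables). The sketch below is only an illustration: CNOT-style copy unitaries stand in for the ##U_i##, with the outcome labels stored in the pointers' computational basis, and an arbitrary rotation plays the role of the evolution between the two measurements. The purely unitary model reproduces the joint probabilities of the projection postulate.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # columns: the sigma_x eigenbasis
P = [np.diag([1.0 + 0j, 0]), np.diag([0, 1.0 + 0j])]          # pointer read-out projectors
X = np.array([[0, 1], [1, 0]], dtype=complex)

def copy_unitary(basis, slot):
    """On system x pointer1 x pointer2: record which `basis` vector the system is in
    into pointer `slot` (pointer prepared in |0>)."""
    U = np.zeros((8, 8), dtype=complex)
    for k in range(2):
        ops = [basis[:, [k]] @ basis[:, [k]].conj().T, I2, I2]   # |xi_k><xi_k| on the system
        ops[slot] = np.linalg.matrix_power(X, k)                 # flip the pointer iff k == 1
        U += np.kron(np.kron(ops[0], ops[1]), ops[2])
    return U

rng = np.random.default_rng(4)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                                       # arbitrary initial qubit state
theta = 0.7
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]], dtype=complex)    # evolution between the measurements

# Projection-postulate prediction: sigma_z outcome s at t1, then sigma_x outcome r at t2
z_basis, x_basis = I2, H
p_collapse = np.zeros((2, 2))
for s in range(2):
    p1 = abs(np.vdot(z_basis[:, s], psi)) ** 2
    for r in range(2):
        p_collapse[s, r] = p1 * abs(np.vdot(x_basis[:, r], V @ z_basis[:, s])) ** 2

# Purely unitary version with two pointer qubits
Psi = np.kron(np.kron(psi, [1, 0]), [1, 0]).astype(complex)
Psi = copy_unitary(z_basis, 1) @ Psi                             # U_1: record sigma_z
Psi = np.kron(V, np.eye(4)) @ Psi                                # system evolves in between
Psi = copy_unitary(x_basis, 2) @ Psi                             # U_2: record sigma_x
p_unitary = np.zeros((2, 2))
for s in range(2):
    for r in range(2):
        branch = np.kron(np.kron(I2, P[s]), P[r]) @ Psi          # read both pointers at the end
        p_unitary[s, r] = np.vdot(branch, branch).real

print(np.allclose(p_collapse, p_unitary))                        # True
```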
 
  • #185
A. Neumaier said:
One prepares individual systems, according to the statistical interpretation, but these individual systems have no state, since the state is a property of the ensemble only, not of the individual systems.

No. Only measurable properties that are definite in the state of the ensemble have a meaning for the individual; in the present case the number of particles specifying the individual system - since this is common to all individual systems by definition of the ensemble. But the individual system has no state - it only has the definite properties common to all individual systems in the preparation.

No. It describes the individual systems in a preparation whose state has definite particle number but otherwise only statistical properties that depend on what is prepared and measured. For example, in an electron accelerator one prepares an electron beam whose individual systems are known to be electrons but whose other properties are undetermined and depend on the measurement performed on them, according to the momentum distribution determined by the state (the detailed preparation).

In fact, strictly speaking, the measurement results are not even properties of the individual system but properties of the detector in contact with the particle field determined by the preparation. One can completely avoid mentioning the individual microscopic systems. Indeed, what one measures in a collision experiment are ionization tracks and tracks of deposited energy - properties of the detection fluid or wires. Quantum mechanics predicts how the statistics of the tracks in the detector is related to the state of the source, both macroscopically determined stuff.

The particles themselves remain invisible and their properties may even be regarded as completely hypothetical. That we say we measured the track of a particle is already an interpretation of the measurement results, even a problematic one: In the most orthodox setting where only properties can be measured that correspond to commuting operators, a quantum particle should not have a track, since a track implies fairly definite position and momentum simultaneously!

I can fully agree with that formulation. The association of the state with the individual systems forming the ensemble is the common equivalence class of preparation procedures. For the single system, only those observables that have been prepared have a definite value. About everything else you have only probabilistic information, which has a meaning only for the ensemble.

The question about the "tracks" of single particles in, e.g., a cloud chamber was fully understood by Mott as early as 1929:

N. F. Mott, The Wave Mechanics of ##\alpha##-ray tracks, Proc. Roy. Soc. A 126, 79 (1929)
http://rspa.royalsocietypublishing.org/content/126/800/79

It is, of course, the interaction of the particle with the matter in the detector which makes the particle appear as if it were moving on a "track". Of course, that's a pretty coarse-grained picture of the particle.
 
