Von Neumann QM Rules Equivalent to Bohm?

stevendaryl
Bohm's deterministic theory was designed to be equivalent to standard QM, but what I'm not sure about is whether that includes Von Neumann's rules.

Von Neumann's rules for the evolution of the wave function are roughly described by:
  1. Between measurements, the wave function evolves according to Schrödinger's equation.
  2. If the wave function is ##\psi## immediately before a measurement, then after measuring an observable ##O## to have eigenvalue ##\lambda##, the wave function will be equal to ##\psi'##, the result of projecting ##\psi## onto the subspace of the Hilbert space where ##O## has eigenvalue ##\lambda##.
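For concreteness, here is a minimal numerical sketch of rule 2 in a finite-dimensional toy Hilbert space; the function name and the spin-##z## example are mine, purely for illustration:

```python
import numpy as np

def project_and_normalize(psi, O, lam, tol=1e-9):
    """Von Neumann rule 2 (toy, finite-dimensional): project psi onto the
    eigenspace of the Hermitian observable O with eigenvalue lam, renormalize."""
    vals, vecs = np.linalg.eigh(O)
    cols = [vecs[:, i] for i in range(len(vals)) if abs(vals[i] - lam) < tol]
    P = sum(np.outer(v, v.conj()) for v in cols)    # projector onto the eigenspace
    psi_proj = P @ psi
    norm = np.linalg.norm(psi_proj)
    if norm < tol:
        raise ValueError("outcome lam has zero probability in state psi")
    return psi_proj / norm                          # collapsed, renormalized state

# Example: measuring spin-z as +1/2 on a qubit in an equal superposition.
Sz = np.diag([0.5, -0.5])
psi = np.array([1.0, 1.0]) / np.sqrt(2)
print(project_and_normalize(psi, Sz, +0.5))         # -> [1. 0.]
```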
I don't want to argue about whether the second process, called "wave function collapse", is physical, or just epistemological, or just a rule of thumb with no particular meaning. But what I do want to know is whether Von Neumann's rules are consistent with Bohmian mechanics.

In Bohmian mechanics, there is no collapse, because the particle is assumed to always have a definite position (and the assumption is made that any kind of measurement can be understood in terms of one or more position measurements). The wave function ##\psi## has a double role: (1) ##|\psi(x)|^2## gives the distribution of possible values for the position variable ##x##, and (2) the wave function acts as a "pilot wave", affecting the trajectory of the particle.
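As a toy illustration of role (2), here is a sketch of the 1D guidance equation ##v = (\hbar/m)\,\mathrm{Im}(\partial_x\psi/\psi)## on a grid; the units, the grid, and the frozen wave packet are my assumptions (in a real calculation ##\psi## would co-evolve under Schrödinger's equation):

```python
import numpy as np

hbar = m = 1.0                        # toy units (assumption, not from the thread)

def bohm_velocity(psi, x, q):
    """1D guidance equation: v(q) = (hbar/m) * Im(psi'(q)/psi(q)),
    with psi sampled on grid x and the derivative taken numerically."""
    dpsi = np.gradient(psi, x)
    i = np.argmin(np.abs(x - q))      # nearest grid point to the particle position q
    return (hbar / m) * np.imag(dpsi[i] / psi[i])

# A right-moving Gaussian packet: the Bohmian velocity is the wavenumber k.
x = np.linspace(-10, 10, 2001)
k = 2.0
psi = np.exp(-x**2) * np.exp(1j * k * x)
print(bohm_velocity(psi, x, q=0.3))   # ~ 2.0
```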

The reason that I'm not certain about the equivalence of Bohmian mechanics and Von Neumann's QM is because after a measurement of position, according to Von Neumann, the wave function is now a delta-function (or at least is described by a highly localized function). But in Bohmian mechanics, there is no collapse, so the wave function continues to be whatever it was before the position measurement. So the two approaches--Von Neumann and Bohm--will be using different wave functions after the measurement. Those two situations don't sound equivalent to me.

Now, maybe it is that in Bohmian mechanics, the act of measuring position causes the wave function to collapse in the same way as Von Neumann, if we take into account the interaction of the particle with whatever device measured position. Is that the resolution?
 
They are equivalent, provided one does rigorous QM. So after a position measurement, the state is not a delta function (it is not square integrable, so it is not an allowed wave function).

The collapse can be derived in Bohmian Mechanics. Essentially there is decoherence exactly as in Many-Worlds, but the conceptual subtleties are done away with by having the trajectory pick one of the worlds. If there is no recoherence (which is the condition for applying collapse in Copenhagen), then one can show that Copenhagen with collapse is a very good approximation to Bohmian Mechanics without collapse.

There's a discussion of this in VI.2 of this reference.

http://arxiv.org/abs/1206.1084
Overview of Bohmian Mechanics
Xavier Oriols, Jordi Mompart
 
Von Neumann's rules are compatible with Bohmian mechanics (BM). Namely, even though there is no true collapse in BM, there is an effective (illusory) collapse which, for all practical purposes, cannot be distinguished from the true collapse.

How does that illusory effective collapse happen? Due to decoherence, the total wave function splits into non-overlapping branches; this is a deterministic, continuous process described by the many-particle Schrödinger equation. Since the branches are non-overlapping, the Bohmian particle may enter only one of the branches. When the particle enters one of the branches, all other branches cease to have influence on the motion of the Bohmian particle. This is effectively the same as if the other branches ceased to exist; they are still there, but now ineffective.

Now you have two descriptions: you can use only the non-empty branch (which corresponds to the collapsed wave function), or you can use all the branches (which corresponds to the uncollapsed wave function). As far as the motion of the Bohmian particle is concerned, the two descriptions are equivalent.
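Schematically, the equivalence of the two descriptions can be stated as follows (notation mine, not from the post):

```latex
% After decoherence the total wave function splits into disjoint branches:
\Psi \;=\; \sum_i c_i\,\Psi_i ,\qquad
\operatorname{supp}\Psi_i \cap \operatorname{supp}\Psi_j \approx \emptyset \quad (i \neq j).
% If the Bohmian configuration Q lies in the support of branch \Psi_k, then
\frac{\hbar}{m}\,\operatorname{Im}\!\left.\frac{\nabla\Psi}{\Psi}\right|_{Q}
\;=\;
\frac{\hbar}{m}\,\operatorname{Im}\!\left.\frac{\nabla\Psi_k}{\Psi_k}\right|_{Q},
% so the full \Psi and the collapsed \Psi_k generate the same velocity at Q.
```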
 
I object to von Neumann's 2nd rule. That's only true for ideal von Neumann filter measurements. Quite often the measured system is destroyed in the measuring procedure, and it doesn't make sense to describe it by a wave function in the sense of an isolated system. E.g., the particles produced at the LHC, whose energies and momenta are accurately measured, hit the detectors and are gone thereafter. It doesn't make sense to associate a wave function with them anymore.
 
vanhees71 said:
E.g., the particles produced at the LHC, whose energies and momenta are accurately measured, hit the detectors and are gone thereafter. It doesn't make sense to associate a wave function with them anymore.
But it still makes sense to associate a QFT state in the Hilbert space to them. For instance, the vacuum is also a state in the Hilbert space. Moreover, even if such QFT states cannot be represented by a wave function, they certainly can be represented by a wave functional. So the von Neumann rule still applies, but now with wave functionals instead of wave functions.
 
vanhees71 said:
I object against von Neumann's 2nd rule. That's only true for ideal von Neumann filter measurements. Quite often the measured system is destroyed in the measuring procedure, and it doesn't make sense to describe it by a wave function for it in the sense of an isolated system.

The Von Neumann rule is really most important for composite systems, so that measuring a property of one component causes the collapse of the wave function for another component.
 
atyy said:
They are equivalent, provided one does rigorous QM. So after a position measurement, the state is not a delta function (it is not square integrable, so it is not an allowed wave function).

I certainly agree that no actual measurement can result in a delta-function; instead, you make an imprecise measurement of position, and you end up with a localized wave function. But that distinction is not really relevant to my post.

atyy said:
The collapse can be derived in Bohmian Mechanics. Essentially there is decoherence exactly as in Many-Worlds, but the conceptual subtleties are done away with by having the trajectory pick one of the worlds. If there is no recoherence (which is the condition for applying collapse in Copenhagen), then one can show that Copenhagen with collapse is a very good approximation to Bohmian Mechanics without collapse.

So that sounds like Bohmian Mechanics is equivalent to Von Neumann via the no-collapse interpretations of QM. I guess that makes sense: Bohm is as compatible with collapse as MWI is.
 
That's true. You need it as a projection postulate in this case, i.e., to get the state of the unmeasured subsystem you have to trace over the other, measured part. This example, however, also shows the EPR problems with collapse interpretations!
 
Demystifier said:
But it still makes sense to associate a QFT state in the Hilbert space to them. For instance, the vacuum is also a state in the Hilbert space. Moreover, even if such QFT states cannot be represented by a wave function, they certainly can be represented by a wave functional. So the von Neumann rule still applies, but now with wave functionals instead of wave functions.
Well, think "state" everywhere in my posting where I've written "wave function". It's the same thing. The Higgs bosons measured at the LHC are long gone and not in an (approximate) eigenstate of energy and momentum. The same holds true for the decay products measured. So one must not take von Neumann's assertions too literally.
 
  • #10
vanhees71 said:
Well, think "state" everywhere in my posting where I've written "wave function". It's the same thing. The Higgs bosons measured at the LHC are long gone and not in an (approximate) eigenstate of energy and momentum. The same holds true for the decay products measured. So one must not take von Neumann's assertions too literally.
The bold part above is wrong. The Higgs which is "gone" is actually the Higgs field in the vacuum state, which is an exact eigenstate of energy and momentum (with eigenvalues equal to zero).

Concerning collapse, it looks as if you missed my point entirely, so let me be more explicit. Consider detection of a photon. There are two possibilities:

1) The photon is not detected and the detector remains in the ground state. The corresponding state in the Hilbert space is ##|\text{photon}\rangle|\text{ground}\rangle##.

2) The photon is detected (and hence destroyed) and the detector jumps to the excited state. The corresponding state in the Hilbert space is ##|0\rangle|\text{excited}\rangle##.

If each of the possibilities has some non-zero quantum probability to occur, then a purely unitary evolution of the quantum state actually gives a superposition ##a|\text{photon}\rangle|\text{ground}\rangle + b|0\rangle|\text{excited}\rangle##. So you need a non-unitary collapse to pick out only one of these two terms in the superposition. By collapse, the state ends up either in state 1) or state 2). In particular, if the photon is detected and destroyed, then the photon field is in the state ##|0\rangle##. The collapse describes a jump to the state ##|0\rangle##. Even though the photon is "destroyed", you still have a well-defined state of the photon field.
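As a toy numerical version of this two-state model (the basis labels and the amplitudes are mine):

```python
import numpy as np

# Toy basis: field {|photon>, |0>}, detector {|ground>, |excited>}.
photon, vac = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ground, excited = np.array([1.0, 0.0]), np.array([0.0, 1.0])

a, b = 0.6, 0.8                              # arbitrary amplitudes, |a|^2 + |b|^2 = 1
state = a * np.kron(photon, ground) + b * np.kron(vac, excited)  # unitary result

# Unitary evolution keeps the superposition; collapse applies a projector.
P_detected = np.kron(np.outer(vac, vac), np.outer(excited, excited))
post = P_detected @ state
prob = np.linalg.norm(post) ** 2             # Born probability |b|^2 = 0.64
post = post / np.linalg.norm(post)           # collapsed state |0>|excited>
print(prob, post)
```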

The word here which should not be taken too literally is not so much the word "collapse", but the word "destroyed". Nothing is really destroyed; the system merely jumps to the ground state of the photon (or Higgs) field.

Let me finish with one technical remark. Even in QFT the state can be described by a wave function, provided that you allow the wave function to depend on an infinite number of coordinates. In particular, the vacuum wave function is constant, not depending on ##x## at all. For more details see e.g. the classic textbook
S.S. Schweber, An Introduction to Relativistic Quantum Field Theory (Eq. 80)
or my own paper
http://lanl.arxiv.org/abs/0904.2287 [Int.J.Mod.Phys.A25:1477-1505,2010]
 
  • #11
stevendaryl said:
I guess that makes sense: Bohm is as compatible with collapse as MWI is.
Exactly!
 
  • #12
I think we're talking about two different things here. I'll have to study your paper first before I can say anything about it. Of course, you can formulate QFT in terms of "wave functionals". Another, more modern textbook than Schweber treating this approach is the book by Hatfield.

What I meant is that von Neumann's formulation has to be taken with a grain of salt. He only treats very special cases of measurements. There's a whole industry of new developments concerning measurement theory since the mid-1930s, which come much closer to the reality in labs. Von Neumann's merit for QT lies imho not so much in the physical interpretation (which I consider totally flawed, since it's a nearly solipsistic overemphasis of the collapse interpretation) but in the mathematical foundation of non-relativistic quantum theory in terms of Hilbert space theory. As far as I know, there's not yet a comparably mathematically rigorous definition of any realistic QFT, let alone the Standard Model.

Speaking very puristically, the Higgs boson (or any other unstable particle) is not defined as an observable entity in relativistic QFT at all. There, only asymptotic free states are well-defined, and in fact what's measured at ATLAS and CMS are of course the stable (or quasi-stable, as in the case of muons) final states (the Higgs was discovered in the two-photon and the two-dilepton (electrons and muons) channels by ATLAS and CMS, as famously announced on Independence Day 2012). Even those stable decay particles have been absorbed by the detectors, and it makes no sense to describe them as if you could take von Neumann's postulate 2 literally.
 
  • #13
vanhees71 said:
I think we're talking about two different things here.
Maybe, but then let us try to make clear what exactly that difference is.

vanhees71 said:
Of course, you can formulate QFT in terms of "wave functionals". Another more modern textbook than Schweber treating this approach is the book by Hatfield.
Note an important difference! Schweber talks about wave functions (depending on an infinite number of particle positions), while Hatfield talks about wave functionals (depending on entire field configurations).

vanhees71 said:
What I meant is that von Neumann's formulation has to be taken with a grain of salt. He only treats very special cases of measurements. There's a whole industry of new developments concerning measurement theory since the mid-1930s, which come much closer to the reality in labs.
Von Neumann talks about projective measurements. Modern measurement theory talks about POVM measurements, which, in a certain sense, are more general than projective measurements. However, they are more general only when one wants to describe measurement without explicitly describing the environment and the measuring apparatus. By contrast, when the quantum state of the environment and the measuring apparatus is also taken into account, then all measurements can be described as projective (von Neumann) measurements.

vanhees71 said:
Von Neumann's merit for QT lies imho not so much in the physical interpretation (which I consider totally flawed since it's a nearly solipsistic overempasis of the collapse interpretation) but in the mathematical foundation of non-relativistic quantum theory in terms of Hilbert space theory. As far as I know, there's not yet an as mathematically strict definition of any realistic QFT, let alone the Standard Model.
Even if the collapse is ignored, another important merit of von Neumann's for QT is his understanding that quantum measurement creates entanglement with the measuring apparatus. This physical (not merely mathematical) insight is the basis of the modern theory of decoherence, which, in turn, has a lot to do with the "illusion of collapse" even if a true collapse is never introduced explicitly.

Even if it is true that von Neumann overemphasized the collapse, it is even more true that most quantum-physics textbooks do not sufficiently emphasize the quantum role of the measuring apparatus and environment. That's probably because Bohr (unlike von Neumann) insisted that the macroscopic world should be described by classical physics, which misguided several generations of physicists.

vanhees71 said:
Even those stable decay particles have been absorbed by the detectors, and it makes no sense to describe them as if you could take von Neumann's postulate 2 literally.
I strongly disagree. The fact that they are absorbed does not imply that you cannot describe the absorption by von Neumann's 2nd postulate. In post #10 I have explained explicitly how you can do that. The collapse with "absorption" is neither more nor less "literal" than the collapse without it.
 
  • #14
vanhees71 said:
As far as I know, there's not yet a comparably mathematically rigorous definition of any realistic QFT, let alone the Standard Model.

What is the status of domain wall fermions and the standard model? Can a lattice standard model be constructed with domain wall fermions, at least in principle, even if it is too inefficient to simulate? Or is the answer still unknown?
 
  • #15
vanhees71 said:
I object to von Neumann's 2nd rule. That's only true for ideal von Neumann filter measurements. Quite often the measured system is destroyed in the measuring procedure, and it doesn't make sense to describe it by a wave function in the sense of an isolated system.
Mathematically, Neumark's theorem guarantees that for any non-ideal (=POVM) measurement in a Hilbert space of dimension ##n##, there is a corresponding ideal (=filter=projective=von Neumann) measurement in a larger Hilbert space of dimension ##N>n##. Physically, the larger Hilbert space corresponds to the inclusion of the environment and measuring apparatus in the quantum description. In other words, all measurements are ideal, provided that you include a sufficient number of degrees of freedom in your description.
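A small numerical illustration of Neumark's theorem, using the standard three-outcome "trine" POVM on a qubit (the concrete dilation below is my sketch, not from the thread):

```python
import numpy as np

# Trine POVM on a qubit: E_k = (2/3)|phi_k><phi_k| for three symmetric states.
phis = [np.array([np.cos(2 * np.pi * k / 3), np.sin(2 * np.pi * k / 3)])
        for k in range(3)]
E = [(2 / 3) * np.outer(p, p) for p in phis]
assert np.allclose(sum(E), np.eye(2))          # POVM: elements sum to the identity

# Neumark dilation: isometry V : C^2 -> C^2 (x) C^3 with V = sum_k sqrt(E_k) (x) |k>,
# so that E_k = V^dag (1 (x) |k><k|) V.
e = np.eye(3)
V = sum(np.kron(np.sqrt(2 / 3) * np.outer(p, p), e[:, [k]])
        for k, p in enumerate(phis))
assert np.allclose(V.conj().T @ V, np.eye(2))  # V is an isometry

psi = np.array([0.8, 0.6])                     # arbitrary normalized qubit state
for k in range(3):
    Pk = np.kron(np.eye(2), np.outer(e[k], e[k]))         # projective, dimension 6 > 2
    assert np.isclose(np.linalg.norm(Pk @ V @ psi) ** 2,  # projective statistics ...
                      psi @ E[k] @ psi)                   # ... equal POVM statistics
print("non-ideal POVM reproduced by an ideal measurement in a larger Hilbert space")
```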
 
  • #16
Mathematically it may make sense to assume that for each observable there exists an ideal-filter measurement. Then you should formulate the axiom in this way and not in the way given in the original posting. You find this formulation very often in textbooks, but it's confusing, at least it was for me for quite some time ;-).
 
  • #17
vanhees71 said:
Mathematically it may make sense to assume that for each observable there exists an ideal-filter measurement. Then you should formulate the axiom in this way and not in the way given in the original posting. You find this formulation very often in textbooks, but it's confusing, at least it was for me for quite some time ;-).

Yes, it's confused since it mixes the beliefs of different churches. But in this age of intolerance, it is ecumenical in spirit :)

http://www.quantiki.org/wiki/The_Church_of_the_larger_Hilbert_space
http://mattleifer.info/wordpress/wp-content/uploads/2008/11/commandments.pdf
 
  • #18
vanhees71 said:
Mathematically it may make sense to assume that for each observable there exists an ideal-filter measurement. Then you should formulate the axiom in this way and not in the way given in the original posting. You find this formulation very often in textbooks, but it's confusing, at least it was for me for quite some time ;-).
So would you agree now that all measurements can effectively be described as a collapse, provided that it is a collapse in a Hilbert space which is usually larger than that of the measured observable?
 
  • #19
I don't consider collapse a physical process, because this causes the old EPR troubles. What we have are local interactions of the system with a measurement apparatus which is appropriately constructed to measure an observable, that's it. The only thing one has are the probabilities for the outcome of measurements given by a state, which I have associated to the system with an appropriate preparation procedure.
 
  • #20
vanhees71 said:
I don't consider collapse a physical process, because this causes the old EPR troubles. What we have are local interactions of the system with a measurement apparatus which is appropriately constructed to measure an observable, that's it. The only thing one has are the probabilities for the outcome of measurements given by a state, which I have associated to the system with an appropriate preparation procedure.

When you say "the probabilities for the outcome of measurements", does that assume that there is a single outcome to a measurement? If there is only one outcome, then it seems to me that the picking of that outcome is a physical process.
 
  • #21
Sure, if you have a well-working measurement device the outcome of a measurement should be unique. If you measure the momentum of a particle with some detector, you get one value with a certain accuracy. Of course, it's a physical process making this possible, namely the interaction of the particle with the detector. That's the very definition of a measurement. All this does not imply anything like a collapse of the state, it's just an interaction of the particle with the apparatus.
 
  • #22
vanhees71 said:
Sure, if you have a well-working measurement device the outcome of a measurement should be unique. If you measure the momentum of a particle with some detector, you get one value with a certain accuracy. Of course, it's a physical process making this possible, namely the interaction of the particle with the detector. That's the very definition of a measurement. All this does not imply anything like a collapse of the state, it's just an interaction of the particle with the apparatus.

Well, if there are multiple possible outcomes before the measurement, and a single outcome after the measurement, then that seems to be a physical change. Unless "possible" is meant epistemologically--we don't know which outcomes are possible until measurement.
 
  • #23
Of course, there is a change, because the system is interacting with the measurement device, but why the heck must one call that a "collapse", and what is "collapsing"? Is something collapsing because they announce the 6 numbers of the German Lotto game on Saturday afternoon? When are all these collapses occurring? When the Lotto numbers are figured out using some random-number generator, or when they are stored in the memory of the computer? When the numbers are noticed by a human being (Bell once asked whether it's enough to have an amoeba cause the first collapse in nature, with regard to collapse interpretations which claim it is necessary to even have a conscious being take notice of a measurement result)? Or do several million collapses occur whenever the TV watchers take notice of the numbers?...

Again, the collapse is unnecessary and confusing rather than helpful in using quantum theory to describe the world!
 
  • #24
vanhees71 said:
Of course, there is a change, because the system is interacting with the measurement device, but why the heck must one call that a "collapse", and what is "collapsing"? Is something collapsing because they announce the 6 numbers of the German Lotto game on Saturday afternoon?!

Well, if, through whatever mechanism, the results of the German Lotto game always gave results that were correlated with the results of the New York Lotto game, then I think people would suspect that either:
  1. The outcomes are influenced by some unknown common factor.
  2. One outcome influences the other remotely.
Possibility 1 would be considered comparable to a "hidden variables" theory, and possibility 2 would be considered comparable to a "collapse" theory.
 
  • #25
vanhees71 said:
Of course, there is a change, because the system is interacting with the measurement device, but why the heck must one call that a "collapse", and what is "collapsing"? Is something collapsing because they announce the 6 numbers of the German Lotto game on Saturday afternoon?

Just to add to stevendaryl's point above. The technical difficulty is that there is no known way to write quantum collapse exactly as Bayesian updating (without introducing hidden variables). So yes, the collapse is needed as something extra in quantum theory. (You can call it whatever you like if you don't like "collapse", but that doesn't change the concept - it is a postulate that is needed to link the quantum formalism to Bayes's rule for the calculation of conditional probabilities.)
 
  • #26
vanhees71 said:
Of course, it's a physical process making this possible, namely the interaction of the particle with the detector.
It's easy to say so, but can you be more specific about the nature of such an interaction? For instance, it is known that such an interaction cannot be described by the Schrödinger equation (or its QFT equivalent) alone. That's because Schrödinger-like unitary evolution necessarily produces superpositions (e.g., a cat in a superposition of dead and alive), while single outcomes need somehow to pick out only one of the terms in the superposition.
 
  • #27
stevendaryl said:
Well, if, through whatever mechanism, the results of the German Lotto game always gave results that were correlated with the results of the New York Lotto game, then I think people would suspect that either:
  1. The outcomes are influenced by some unknown common factor.
  2. One outcome influences the other remotely.
Possibility 1 would be considered comparable to a "hidden variables" theory, and possibility 2 would be considered comparable to a "collapse" theory.

Possibility 3: The setup is such that the drawing of the Lotto numbers in Germany and NY is done with an entangled system. This is very fair, because nothing known in nature is more random than that, certainly better than any pseudorandom number generator in computers can ever be. It would only be unfair if the Germans looked at their drawing before the New Yorkers and then bet in NY on the winning numbers they already know ;-)).

Also here, there's no collapse nor any "spooky action at a distance", if you don't postulate one. It's the preparation of the system with entangled subsystems which "imprints" the correlations described by this entanglement. It's not the measurement at A which causes the result at B, and vice versa. This is only so if you insist on a collapse, which has imho no basis in any observation made so far. With the minimal interpretation everything is consistent, and no EPR problems with causality occur.
 
  • #28
atyy said:
Just to add to stevendaryl's point above. The technical difficulty is that there is no known way to write quantum collapse exactly as Bayesian updating (without introducing hidden variables). So yes, the collapse is needed as something extra in quantum theory. (You can call it whatever you like if you don't like "collapse", but that doesn't change the concept - it is a postulate that is needed to link the quantum formalism to Bayes's rule for the calculation of conditional probabilities.)
Where do you need a collapse here? I just measure, e.g., a spin component (to have the simple case of a discrete observable) and take notice of the result. If you have a filter measurement (the usually discussed Stern-Gerlach apparatuses are such), I filter out all partial beams I don't want and am left with a polarized beam in the spin state I want. That's all. I don't need a collapse. The absorption of the unwanted partial beams is due to local interactions of the particles with the absorber. There's no collapse!
 
  • #29
Demystifier said:
It's easy to say so, but can you be more specific about the nature of such an interaction? For instance, it is known that such an interaction cannot be described by the Schrödinger equation (or its QFT equivalent) alone. That's because Schrödinger-like unitary evolution necessarily produces superpositions (e.g., a cat in a superposition of dead and alive), while single outcomes need somehow to pick out only one of the terms in the superposition.
The non-unitarity comes in because you project onto the relevant macroscopic observables (coarse-graining). The paradigmatic example is how you go from the full quantum evolution of a single-particle distribution function (Kadanoff-Baym equation) to macroscopic equations (transport equations). Only the latter lead to entropy production and thus irreversibility. The non-unitarity is emergent and not due to the underlying exact equations, which you never can solve (or observe!) because of the complexity of a detailed microscopic state of a macroscopic system, and, as Bohr rightly stressed, a measurement device must be macroscopic!
 
  • #30
vanhees71 said:
The non-unitarity comes in because you project onto the relevant macroscopic observables (coarse-graining). The paradigmatic example is how you go from the full quantum evolution of a single-particle distribution function (Kadanoff-Baym equation) to macroscopic equations (transport equations). Only the latter lead to entropy production and thus irreversibility. The non-unitarity is emergent and not due to the underlying exact equations, which you never can solve (or observe!) because of the complexity of a detailed microscopic state of a macroscopic system, and, as Bohr rightly stressed, a measurement device must be macroscopic!
I find this explanation very similar to atyy's description of collapse in terms of the Heisenberg (macro/micro) cut. Saying that the non-unitarity is emergent from the micro system-macro apparatus interaction certainly seems to follow the Copenhagen spirit as far as I can see.
 
  • #31
Sure, there are Copenhagen flavors without collapse. Whether you call the minimal interpretation Copenhagen or not is a matter of taste. I don't feel fit to answer whether Bohr is a "minimal interpreter" or not. For that, I'd have to dive into the original papers written by Bohr, and that's no fun to read. Bohr has too many words and not enough equations for my taste ;-)). Heisenberg is also a pretty difficult case. His interpretation seems not to be exactly the same as Bohr's, as can be seen from Bohr's famous correction of Heisenberg's first paper concerning the uncertainty relation, which is very important in this context: Heisenberg claimed that his uncertainty relation says that you cannot measure (!) position and momentum simultaneously (!) on one system, while Bohr (in my opinion more correctly) says that the particle cannot be prepared such that its position and momentum are determined better than allowed by the uncertainty relation.

Of course, another important point of interpretation of QT indeed is that in the microscopic realm you cannot measure quantities without disturbing the system to some minimal extent. This reaches far into the fundamental operational definitions of the observables. E.g., classically you define the electric field of a charge distribution by the (instantaneous) force acting on a test charge, where the test charge is meant to be taken in the limit ##q_{\text{test}} \rightarrow 0##, such that you don't disturb the charge distribution whose field you want to measure by the interaction with the test charge. Now, if you want to do so for a single electron, you cannot do that anymore, since there are no test charges smaller than one elementary charge you could use. These disturbance-measurement uncertainties, however, are not what's described by the Heisenberg-Robertson uncertainty relations but are (as far as I know) still under debate by the experts.

There's a posting by me about one such relation and its realization somewhere on PF, which was never discussed, for whatever reason!

https://www.physicsforums.com/threa...elation-vs-noise-disturbance-measures.664972/
 
  • #32
Demystifier said:
It's easy to say so, but can you be more specific about the nature of such an interaction? For instance, it is known that such an interaction cannot be described by the Schrödinger equation (or its QFT equivalent) alone. That's because Schrödinger-like unitary evolution necessarily produces superpositions (e.g., a cat in a superposition of dead and alive), while single outcomes need somehow to pick out only one of the terms in the superposition.
Would you agree that collapse can be described by a Lindblad equation? If so, then one can take one's algebra of observables and define an algebraic state on it by ##\omega(A)=\mathrm{Tr}(\rho A)##, and the trace-preserving time evolution given by the Lindblad equation defines a stable *-automorphism ##\alpha_t## on the algebra of observables. One can then compute the GNS Hilbert space ##\mathcal{H}_\omega## for ##\omega##, and then there is a theorem that lets us represent ##\alpha_t## by unitary operators. So one can represent the collapse by a unitary evolution by sacrificing the irreducibility of the representation.
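For reference, a minimal numerical sketch of what a Lindblad (GKSL) evolution looks like, here for pure dephasing of a qubit with jump operator ##L=\sigma_z## (the parameters and the simple Euler integrator are my choices):

```python
import numpy as np

# Pure-dephasing Lindblad equation for a qubit (toy parameters):
#   d rho/dt = -i[H, rho] + gamma (L rho L^dag - {L^dag L, rho}/2),  L = sigma_z
sz = np.diag([1.0, -1.0])
H = np.zeros((2, 2))                 # no Hamiltonian, to isolate the dissipator
gamma, dt, steps = 0.5, 0.001, 5000

rho = 0.5 * np.ones((2, 2))          # pure state |+><+| with maximal coherences
for _ in range(steps):
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (sz @ rho @ sz - 0.5 * (sz @ sz @ rho + rho @ sz @ sz))
    rho = rho + dt * (comm + diss)

print(np.round(rho, 3))              # off-diagonals decay as exp(-2*gamma*t)
print(np.trace(rho).real)            # the trace stays 1: the map is trace-preserving
```

Note what this does and does not show: the off-diagonal elements decay, but the diagonal keeps both entries at 1/2, so no single outcome has been selected.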
 
  • #33
Oops. Can you translate this into physics for a poor theoretical physicist? Taking the trace could mean what I describe as "coarse-graining". You seem to define an expectation value as the observable. Is this right? If so, then it seems to go in the direction I mean: You take a macroscopic ("classical") observable (like a pointer position of some measurement device) as the expectation value averaged over many microscopic degrees of freedom. This classical "pointer state" then can, if the measurement procedure is appropriate for the observable on the quantum system you want to measure, provide the measurement of this observable. The paradigmatic example, which can be (even nearly analytically) analyzed fully quantum mechanically, is the Stern-Gerlach experiment: the position of the silver atoms is measured by letting them hit a photoplate, leaving well-distinguishable marks for spin-up and spin-down polarized atoms. A macroscopic observable (a blackened grain in the photo plate) is accurate enough to resolve a microscopic quantity (the spin-z component of a silver atom). There's no collapse necessary. The pattern left by the silver atoms on the photo plate is completely describable by solving the time-dependent Schrödinger equation and using Born's rule for its interpretation!
 
  • #34
I have never thought much about the physical interpretation of this purely mathematical fact, but now that you wrote this, it seems like it would be exactly the right interpretation. The GNS Hilbert space will contain more degrees of freedom, and one could certainly try to interpret them as some pointer degrees of freedom. Unfortunately, I will only have time to explain it in more detail in the evening. For the meantime (if you can't wait :smile:), I recommend Strocchi's book on the matter.
 
  • #35
rubi said:
Would you agree that collapse can be described by a Lindblad equation?
Yes, but not in a way which would be compatible with unitary evolution for a larger system.
 
  • #36
vanhees71 said:
The non-unitarity comes in because you project onto the relevant macroscopic observables (coarse-graining).
Yes, but not a kind of non-unitarity which could pick out only one of the terms in the superposition. Such coarse-graining leads to decoherence, which induces a transition from a coherent to an incoherent superposition. The density matrix evolves from a pure state to a mixed state, i.e. from a non-diagonal matrix to a diagonal one. But the diagonal matrix still has more than one non-vanishing component on the diagonal (e.g. one corresponding to the dead cat and another to the alive cat), so the system still does not pick out only one of the possibilities.
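Schematically (notation mine):

```latex
% Decoherence alone: coarse-graining takes the pure-state density matrix to
\begin{pmatrix} |a|^2 & ab^* \\ a^*b & |b|^2 \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} |a|^2 & 0 \\ 0 & |b|^2 \end{pmatrix},
% an incoherent mixture that still contains both outcomes; a collapse to, say,
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
% is an additional, non-unitary selection that decoherence by itself does not supply.
```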

To really get only one of the possibilities from this you need to assume something additional (for example a collapse, or some hidden variables, or many worlds), but you, as an adherent of a minimal statistical-ensemble interpretation, refuse to make any specific additional assumption. Yes, by accepting such a minimal interpretation you avoid unjustified speculations, but the problem is that such a minimal interpretation leaves some questions unanswered. For me, it's more honest to risk a possibly wrong answer (including collapse) than to pretend that there is no question.
 
  • #37
If a particle hits a photoplate it leaves a spot there. So what else do you need to measure the particle's position? Quantum theory predicts the probability for this to happen, not more and not less. So why would you introduce a collapse to "explain" something which is not explainable within the theory (because there's no cause for the particle to end up at the observed position within QT, which only states probabilities for this to happen), but at the cost of introducing inconsistencies into the theory? And it's also not clear to me what the collapse explains, because it doesn't provide an explanation either of why the specific particle hits the specific spot on the photo plate. So what is it that it does explain?
 
  • #38
vanhees71 said:
If a particle hits a photoplate it leaves a spot there. So what else do you need to measure the particle's position?
Nothing, that's enough for measurement. But measurement is not an explanation.

vanhees71 said:
Quantum theory predicts the probability for this to happen, not more and not less.
Exactly!

vanhees71 said:
So why would you introduce a collapse to "explain" something which is not explainable within the theory
This is like asking, for instance, why would you introduce neutrino masses to explain observed neutrino oscillations if the oscillations are not explainable by the standard model of massless neutrinos? New ideas in physics are introduced precisely because some phenomena are not explainable within old theories.

vanhees71 said:
but at the cost of introducing inconsistencies into the theory?
New theories often look inconsistent at first (e.g. UV divergences in QFT), but then the job of scientists is to further develop the theory to remove the inconsistencies.

vanhees71 said:
And it's also not clear to me what the collapse explains, because it doesn't provide an explanation either of why the specific particle hits the specific spot on the photo plate. So what is it that it does explain?
That's a much better question. The role of collapse is not so much to explain something as to offer a possible ontology behind the measured phenomena. In the collapse picture, the wave function is not merely a probability, but an actual physical thing that exists at the level of a single object. Suppose I ask you what an electron looks like before I measure it. Before I measure it, is it a wave or a particle? Does it have any shape at all before I measure it? With standard minimal QM you cannot answer such questions. With a collapse picture you can. You may say those are philosophical questions, but sometimes thinking about philosophical questions may eventually lead to new measurable predictions. For example, the GRW theory of collapse leads to new predictions, which seem to be ruled out by experiments. There are also other collapse theories which are not (yet) ruled out.
 
  • #39
rubi said:
Would you agree that collapse can be described by a Lindblad equation? If so, then one can take ones algebra of observables and define an algebraic state on it by ##\omega(A)=\mathrm{Tr}(\rho A)## and the trace preserving time evolution by the Lindblad equation defines a stable *-automorphism ##\alpha_t## on the algebra of observables. One can then compute the GNS Hilbert space ##\mathcal{H}_\omega## for ##\omega## and then there is a theorem that let's us represent ##\alpha_t## by unitary operators. So one can represent the collapse by a unitary evolution by sacrificing the irreducibilty of the representation.

Is the final equation on http://en.wikiversity.org/wiki/Open_Quantum_Systems/The_Lindblad_Form what you mean by the Lindblad equation? Does the Lindblad equation only include trace-preserving maps? I think collapse is not trace-preserving.
 
  • #40
atyy said:
I think collapse is not trace preserving.
It is, because the collapse is not only picking one term in the superposition, but also includes the appropriate change of the normalization of that term.
 
  • #41
Demystifier said:
It is, because the collapse is not only picking one term in the superposition, but also includes the appropriate change of the normalization of that term.

Yes, one can define it that way too, but in that case, it is also true that the evolution of the larger system cannot be unitary and deterministic.

Anyway, for the definition of collapse as trace non-preserving, I was thinking of the language used in Nielsen and Chuang around their Eq 8.28. The relationship between definitions in which collapse is defined as trace non-preserving or trace preserving is given in their exercise 8.8, in which one introduces an additional operator.

So if all the measurement operators sum to 1, which is how it's discussed in http://en.wikiversity.org/wiki/Open_Quantum_Systems/The_Lindblad_Form, I think it is the case that collapse is not trace-preserving.
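To make the two bookkeeping conventions concrete, here is a small numeric sketch with measurement operators ##M_m## satisfying ##\sum_m M_m^\dagger M_m = 1## (the standard measurement-operator formalism; the concrete operators below are mine, not copied from Nielsen and Chuang):

```python
import numpy as np

M0 = np.diag([1.0, 0.0])                     # measurement operator, outcome "0"
M1 = np.diag([0.0, 1.0])                     # measurement operator, outcome "1"
assert np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2))

rho = 0.5 * np.ones((2, 2))                  # the state |+><+|

# Convention A: unnormalized update; the trace drops to the outcome probability.
rho_A = M0 @ rho @ M0.conj().T
print(np.trace(rho_A))                       # 0.5, not 1 -> not trace-preserving

# Convention B: renormalized update; each selective outcome keeps trace 1.
rho_B = rho_A / np.trace(rho_A)
print(np.trace(rho_B))                       # 1.0

# The non-selective average over all outcomes is trace-preserving either way.
rho_avg = M0 @ rho @ M0.conj().T + M1 @ rho @ M1.conj().T
print(np.trace(rho_avg))                     # 1.0
```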
 
  • #42
atyy said:
Yes, one can define it that way too, but in that case, it is also true that the evolution of the larger system cannot be unitary and deterministic.

Anyway, for the definition of collapse as trace non-preserving, I was thinking of the language used in Nielsen and Chuang around their Eq 8.28. The relationship between definitions in which collapse is defined as trace non-preserving or trace preserving is given in their exercise 8.8, in which one introduces an additional operator.

So if all the measurement operators sum to 1, which is how it's discussed in http://en.wikiversity.org/wiki/Open_Quantum_Systems/The_Lindblad_Form, I think it is the case that collapse is not trace-preserving.
Ah, you are talking about the Lindblad equation in a narrower context, in which it is derived from unitary evolution in the larger Hilbert space. I was talking about the Lindblad equation in a wider context, in which it does not necessarily need to be derived from a unitary evolution in the larger space. In such a wider context there is no Eq. (8.28).
 
  • #43
So maybe rubi and you are talking about different Lindblad equations in posts #32 and #35?
 
  • #44
Maybe. But then the answer to his question would be a "trivial" no, so that's why I assumed that he had a wider point of view.
 
  • #45
Demystifier said:
Maybe. But then the answer to his question would be a "trivial" no, so that's why I assumed that he had a wider point of view.

So perhaps the criticism then is that even from the wider point of view, no matter what tricks one uses to get the whole system to evolve deterministically and unitarily, the Lindblad equation does not include collapse because

(1) collapse is probabilistic time evolution

(2) rubi cannot calculate from the Lindblad equation the joint probability of being at position A at time ##t_A## and at position B at time ##t_B##, which is what collapse allows one to do.
 
  • #46
Demystifier said:
Nothing, that's enough for measurement. But measurement is not an explanation.
This is like asking, for instance, why would you introduce neutrino masses to explain observed neutrino oscillations if the oscillations are not explainable by the standard model of massless neutrinos? New ideas in physics are introduced precisely because some phenomena are not explainable within old theories.
There's a very big difference between the introduction of neutrino masses into the Standard Model and the assumption of a collapse in QT. Neutrino oscillations are an observed fact, and you have to introduce neutrino masses into the Standard Model (which is possible, btw., without destroying the (perturbative) consistency of this model). By contrast, there's no observed fact which would force me to introduce a collapse and, in addition, the introduction of this idea is very problematic (EPR!). So while I'm practically forced to introduce neutrino masses to adequately describe the observed fact of mixing, there's no necessity to bother oneself with unobserved and unnecessary collapses in QT!

Demystifier said:
New theories often look inconsistent at first (e.g. UV divergences in QFT), but then the job of scientists is to further develop the theory to remove the inconsistencies.

QFT is a pretty successful model (although not a strictly consistent one, one must admit) which can be defined in an approximate sense (perturbative QFT with renormalization, which even has a physical interpretation thanks to Kadanoff and Wilson).

Demystifier said:
That's a much better question. The role of collapse is not so much to explain something as to offer a possible ontology behind the measured phenomena. In the collapse picture, the wave function is not merely a probability, but an actual physical thing that exists at the level of a single object. Suppose I ask you what an electron looks like before I measure it. Before I measure it, is it a wave or a particle? Does it have any shape at all before I measure it? With standard minimal QM you cannot answer such questions. With a collapse picture you can. You may say those are philosophical questions, but sometimes thinking about philosophical questions may eventually lead to new measurable predictions. For example, the GRW theory of collapse leads to new predictions, which seem to be ruled out by experiments. There are also other collapse theories which are not (yet) ruled out.
Since when do you need an "ontology" in physics? Physics is about the description of objective observable facts about nature, not about providing an ontology (although famous people like Einstein opposed this view vehemently). E.g., it doesn't make sense to ask whether a "particle" (better say "quantum" here) has a "shape" at all within QT. You can only describe the probability of the outcome of concrete measurements (observables), which are defined as (an equivalence class of) measurement procedures.

Yesterday, someone quoted the book

F. Strocchi, An Introduction to the Mathematical Structure of Quantum Mechanics, World Scientific (2005)

This is one of the best expositions of QT I've seen in years, although everything is unfortunately hidden behind quite formal mathematics; but that's the essence of QT, without any superfluous additions which cause only trouble!
 
  • #47
vanhees71 said:
By contrast, there's no observed fact which would force me to introduce a collapse and, in addition, the introduction of this idea is very problematic (EPR!). So while I'm practically forced to introduce neutrino masses to adequately describe the observed fact of mixing, there's no necessity to bother oneself with unobserved and unnecessary collapses in QT!

vanhees71 said:
Since when do you need an "ontology" in physics? Physics is about the description of objective observable facts about nature, not about providing an ontology (although famous people like Einstein opposed this view vehemently).

The collapse, or an equivalent assumption, is necessary in quantum mechanics, and its predictions have been verified experimentally.

However, one should be clear that the standard collapse is not intended to provide an ontology, unlike the GRW collapse. Throughout most of this thread, including the OP, the collapse is the standard collapse, not the GRW collapse.

By citing EPR as an objection against collapse, it shows that you believe ontology is important in physics. It means that you believe that in special relativity, the cause of an event should be in its past light cone.
 
  • #48
atyy said:
The collapse, or an equivalent assumption, is necessary in quantum mechanics, and its predictions have been verified experimentally.
You keep repeating this every time, but I've not seen a single example of such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of these, I need very convincing experimental evidence for a collapse!

atyy said:
However, one should be clear that the standard collapse is not intended to provide an ontology, unlike the GRW collapse. Throughout most of this thread, including the OP, the collapse is the standard collapse, not the GRW collapse.
As far as I can tell, most of your objections to the standard collapse are because you believe there should be an ontology in physics (the cause of an event should be in its past light cone), and you believe that standard collapse causes trouble for your ontology, which is why you reject it. Citing EPR as a reason not to believe in collapse means that you believe that ontology is important in physics.

It causes trouble not for any whatever-ology, but for the overwhelming evidence for the correctness of relativistic space-time for all (at least all local) observations made so far. Either you believe in the existence of a collapse, or in Einstein causality and locality. The most successful model ever, the Standard Model of elementary particle physics, obeys both causality and locality. One doesn't need a collapse to derive all observable predictions of it, and these predictions are validated by all observations made so far (to the dismay of the particle theorists, who'd like to find evidence for physics beyond the Standard Model in order to see how to overcome some of its difficulties, including the hierarchy problem and the description of dark matter, and to find a hint where to look for direct evidence of what it is made of).
 
  • #49
vanhees71 said:
Oops. Can you translate this into physics for a poor theoretical physicist?
The idea of the algebraic framework is to extract the relevant part of QM (observable facts) and get rid of the mathematical parts that have no relevance (like the choice of a Hilbert space). In QM, we are interested in the behaviour of certain sets of observables (position, momentum, ...), and these observables form an algebra (they can be multiplied, for example). A state of a system tells us all the physical information that can be extracted in principle (like expectation values, probabilities, ...). In QM, we usually have a Hilbert space with operators, and a state is determined by a vector ##\Psi##. Expectation values are given by ##\left<A\right>=\left<\Psi,A\Psi\right>##. A state could also be given by a density matrix ##\rho##, and the expectation values would be ##\left<A\right>=\mathrm{Tr}(\rho A)##. So the expectation value functional takes an observable and spits out a number (the expectation value).

Now there is a mathematical theorem (GNS) that says that when we have a certain algebra (of observables) and know all the expectation values of these observables, then we can reconstruct a Hilbert space ##\mathcal H##, a representation ##\pi## of the algebra and a vector ##\Omega##, such that the expectation values are given by ##\left<A\right> = \left<\Omega,\pi(A)\Omega\right>##. (The expectation value functional is usually denoted by ##\omega(A)## rather than ##\left<A\right>##.) But that also means that even if we have an algebra of observables and a state given by a density matrix, we can construct a new Hilbert space such that the state that was formerly given by a density matrix is now a plain old vector state (##\Omega##): We just use our old algebra as the algebra and the "algebraic state" ##\omega(A)=\mathrm{Tr}(\rho A)## as the expectation value functional and apply the theorem. (It constructs the new Hilbert space and the new representation of the algebra explicitly.)

Now what does that look like concretely? Let's say we have an algebra of observables ##\mathfrak A## on a concrete Hilbert space ##\mathcal H## and a density matrix ##\rho## on ##\mathcal H##. The density matrix can always be written as ##\rho=\sum_n \rho_n b_n \left<b_n,\cdot\right>##, where ##(b_n)_n## is an ONB for ##\mathcal H##. We can now define a new Hilbert space ##\mathcal H' = \bigoplus_n\mathcal H##, a representation ##\pi(A) (\bigoplus_n v_n) = \bigoplus_n A v_n## and a vector ##\Omega_\rho = \bigoplus_n \sqrt{\rho_n} b_n##. We can verify that we get the same expectation value as before: ##\mathrm{Tr}(\rho A) = \left<\Omega_\rho,\pi(A)\Omega_\rho\right>##. Every density matrix on ##\mathcal H## can be represented this way by a normalized vector ##\Omega_\rho## in ##\mathcal H'## and since they are normalized, they are related by unitary transformations. So if one has two density matrices ##\rho(t_1)## and ##\rho(t_2)## in ##\mathcal H##, there is a unitary operator ##U(t_2,t_1)## in ##\mathcal H'## such that ##\Omega_{\rho(t_2)}=U(t_2,t_1)\Omega_{\rho(t_1)}##.

Edit: I should probably add what the inner product on ##\mathcal H'## is: ##\left<\bigoplus_n v_n, \bigoplus_n w_n\right>_{\mathcal H'} = \sum_n\left<v_n,w_n\right>_{\mathcal H}##
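A finite-dimensional numerical check of this construction (the dimension, the random state, and the variable names are mine):

```python
import numpy as np

# Check: Tr(rho A) = <Omega, pi(A) Omega> with H' = (+)_n H, pi(A) blockwise,
# Omega = (+)_n sqrt(rho_n) b_n, as in the construction above.
d = 3
rng = np.random.default_rng(0)

X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = X @ X.conj().T
rho /= np.trace(rho)                        # random density matrix on H = C^d
Y = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (Y + Y.conj().T) / 2                    # random Hermitian observable

p, b = np.linalg.eigh(rho)                  # rho = sum_n p_n |b_n><b_n|

# The direct sum of d copies of H is C^(d*d); pi(A) acts blockwise as 1 (x) A.
Omega = np.concatenate([np.sqrt(p[n]) * b[:, n] for n in range(d)])
piA = np.kron(np.eye(d), A)

print(np.allclose(np.trace(rho @ A),            # density-matrix expectation ...
                  Omega.conj() @ piA @ Omega))  # ... equals the vector-state one: True
```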
 
  • #50
vanhees71 said:
You keep repeating this every time, but I've not seen a single example of such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of these, I need very convincing experimental evidence for a collapse! It causes trouble not for any whatever-ology, but for the overwhelming evidence for the correctness of relativistic space-time for all (at least all local) observations made so far. Either you believe in the existence of a collapse, or in Einstein causality and locality. The most successful model ever, the Standard Model of elementary particle physics, obeys both causality and locality. One doesn't need a collapse to derive all observable predictions of it, and these predictions are validated by all observations made so far (to the dismay of the particle theorists, who'd like to find evidence for physics beyond the Standard Model in order to see how to overcome some of its difficulties, including the hierarchy problem and the description of dark matter, and to find a hint where to look for direct evidence of what it is made of).
But I don't understand why you associate collapse (or call it the non-unitary measurement evolution postulate, since you don't like the word collapse; this is how it is called in my QM notes, which never mention the word collapse) with a breaking of QFT microcausality. In the sense it is used here, which doesn't take the wavefunction as something real (like in the ensemble interpretation you subscribe to), there is no FTL or anything like that implied.
 