Von Neumann QM Rules Equivalent to Bohm?

  • #31
Sure, there are Copenhagen flavors without collapse. Whether you call the minimal interpretation Copenhagen is a matter of taste. I don't feel fit to answer whether Bohr is a "minimal interpreter" or not. For that, I'd have to dive into the original papers written by Bohr, and that's no fun to read. Bohr has too many words and not enough equations for my taste ;-)). Heisenberg is also a pretty difficult case. His interpretation seems not to be exactly the same as Bohr's, as can be seen from Bohr's famous correction of Heisenberg's first paper on the uncertainty relation, which is very important in this context: Heisenberg claimed that his uncertainty relation says that you cannot measure (!) position and momentum simultaneously (!) on one system, while Bohr (in my opinion more correctly) says that the particle cannot be prepared such that its position and momentum are determined better than allowed by the uncertainty relation.

Of course, another important point in the interpretation of QT is indeed that in the microscopic realm you cannot measure quantities without disturbing the system to some minimal extent. This reaches far into the fundamental operational definitions of the observables. E.g., classically you define the electric field of a charge distribution by the (instantaneous) force acting on a test charge, where one takes the limit ##q_{\text{test}} \rightarrow 0## such that the interaction with the test charge does not disturb the charge distribution whose field you want to measure. Now, if you want to do so for a single electron, you cannot do that anymore, since there are no test charges smaller than one elementary charge you could use. These disturbance-measurement uncertainties, however, are not what is described by the Heisenberg-Robertson uncertainty relations; their proper formulation is (as far as I know) still under debate by the experts.

There's a posting by me about one such relation and its realization somewhere on PF, which was never discussed, for whatever reason!

https://www.physicsforums.com/threa...elation-vs-noise-disturbance-measures.664972/
 
  • #32
Demystifier said:
It's easy to say so, but can you be more specific about the nature of such an interaction? For instance, it is known that such an interaction cannot be described by the Schrödinger equation (or its QFT equivalent) alone. That's because the Schrödinger-like unitary evolution necessarily produces superpositions (e.g. a cat in a superposition of dead and alive), while single outcomes need somehow to pick up only one of the terms in the superposition.
Would you agree that collapse can be described by a Lindblad equation? If so, then one can take one's algebra of observables and define an algebraic state on it by ##\omega(A)=\mathrm{Tr}(\rho A)##, and the trace-preserving time evolution given by the Lindblad equation defines a stable *-automorphism ##\alpha_t## on the algebra of observables. One can then compute the GNS Hilbert space ##\mathcal{H}_\omega## for ##\omega##, and then there is a theorem that lets us represent ##\alpha_t## by unitary operators. So one can represent the collapse by a unitary evolution by sacrificing the irreducibility of the representation.
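As a sanity check on the trace-preservation claim, here is a minimal numerical sketch (my own toy example, not from any post in the thread): a single qubit under a pure-dephasing Lindblad equation, integrated with naive Euler steps. The choice of jump operator ##L = \sqrt{\gamma}\,\sigma_z## and all rates are arbitrary, for illustration only.

```python
import numpy as np

# Single-qubit Lindblad equation with a dephasing jump operator.
# The point is only to check numerically that Tr(rho) is preserved
# while the off-diagonal coherence decays.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.zeros((2, 2), dtype=complex)   # trivial Hamiltonian, for simplicity
L = np.sqrt(0.5) * sz                 # dephasing jump operator, arbitrary rate

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

# start in the pure superposition state |+><+|
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

dt = 0.001
for _ in range(5000):                 # crude Euler integration
    rho = rho + dt * lindblad_rhs(rho)

print(np.trace(rho).real)             # stays 1: the evolution is trace-preserving
print(abs(rho[0, 1]))                 # the coherence has decayed away
```

The trace stays fixed because the Lindblad right-hand side is traceless term by term; only the off-diagonal element of ##\rho## is damped.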
 
  • #33
Uups. Can you translate this for a poor theoretical physicist into physics? Taking the trace could mean what I describe as "coarse-graining". You seem to define an expectation value as the observable. Is this right? If so, then it seems to go in the direction I mean: You take a macroscopic ("classical") observable (like a pointer position of some measurement device) as the expectation value averaged over many microscopic degrees of freedom. This classical "pointer state" can then, if the measurement procedure is appropriate for the observable on the quantum system you want to measure, provide the measurement of this observable. The paradigmatic example, which can be (even nearly analytically) analyzed fully quantum mechanically, is the Stern-Gerlach experiment: the position of the silver atoms is measured by letting them hit a photo plate, leaving well-distinguishable marks for spin-up and spin-down polarized atoms. A macroscopic observable (a blackened grain in the photo plate) is accurate enough to resolve a microscopic quantity (the spin-z component of a silver atom). There's no collapse necessary. The pattern left by the silver atoms on the photo plate is completely describable by solving the time-dependent Schrödinger equation and using Born's rule for its interpretation!
 
  • #34
I have never thought much about the physical interpretation of this purely mathematical fact, but now that you wrote this, it seems like it would be exactly the right interpretation. The GNS Hilbert space will contain more degrees of freedom, and one could certainly try to interpret them as some pointer degrees of freedom. Unfortunately, I will only have time to explain it in more detail in the evening. In the meantime (if you can't wait :smile:), I recommend Strocchi's book on the matter.
 
  • #35
rubi said:
Would you agree that collapse can be described by a Lindblad equation?
Yes, but not in a way which would be compatible with unitary evolution for a larger system.
 
  • #36
vanhees71 said:
The non-unitarity comes in, because you project to the relevant macroscopic observables (coarse-graining).
Yes, but not a kind of non-unitarity which could pick up only one of the terms in the superposition. Such coarse-graining leads to decoherence, which induces a transition from a coherent to an incoherent superposition. The density matrix evolves from a pure state to a mixed state, i.e. from a non-diagonal matrix to a diagonal one. But the diagonal matrix still has more than one non-vanishing component on the diagonal (e.g. one corresponding to the dead cat and another to the alive cat), so the system still does not pick up only one of the possibilities.
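This point can be made concrete with a tiny sketch (my own, not from the thread): a "cat" qubit entangled with a one-qubit "environment" by a perfectly correlating interaction. Tracing out the environment (the coarse-graining) yields a diagonal reduced density matrix, but both outcomes survive on the diagonal, so nothing has been picked.

```python
import numpy as np

# "Cat" qubit and a one-qubit "environment"; the interaction correlates
# |alive>|e0> with |dead>|e1>.  Decoherence kills the off-diagonal terms
# of the reduced density matrix, but both diagonal entries remain.
alive = np.array([1, 0], dtype=complex)
dead = np.array([0, 1], dtype=complex)
env0 = np.array([1, 0], dtype=complex)
env1 = np.array([0, 1], dtype=complex)

# entangled state after the unitary interaction
psi = (np.kron(alive, env0) + np.kron(dead, env1)) / np.sqrt(2)
rho_total = np.outer(psi, psi.conj())

# partial trace over the environment (indices: [system, env, system', env'])
rho_cat = rho_total.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.round(rho_cat.real, 3))
# [[0.5 0. ]
#  [0.  0.5]]   diagonal, yet two nonvanishing entries remain
```

The reduced state is an incoherent 50/50 mixture: decoherence has happened, but the single-outcome problem is untouched.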

To really get only one of the possibilities from this, you need to assume something additional (for example a collapse, or some hidden variables, or many worlds), but you, as an adherent of a minimal statistical ensemble interpretation, refuse to make any specific additional assumption. Yes, by accepting such a minimal interpretation you avoid unjustified speculations, but the problem is that such a minimal interpretation leaves some questions unanswered. For me, it's more honest to risk a possibly wrong answer (including collapse) than to pretend that there is no question.
 
Last edited:
  • #37
If a particle hits a photo plate it leaves a spot there. So what else do you need to measure the particle's position? Quantum theory predicts the probability for this to happen, not more and not less. So why would you introduce a collapse to "explain" something which is not explainable within the theory (because within QT there is no cause for the particle to end up at the observed position; the theory only states probabilities for this to happen), but at the cost of introducing inconsistencies into the theory? And it's also not clear to me what the collapse explains, because it doesn't provide an explanation either for why the specific particle hits the specific spot on the photo plate. So what exactly does it explain?
 
  • #38
vanhees71 said:
If a particle hits a photo plate it leaves a spot there. So what else do you need to measure the particle's position?
Nothing, that's enough for measurement. But measurement is not an explanation.

vanhees71 said:
Quantum theory predicts the probability for this to happen, not more and not less.
Exactly!

vanhees71 said:
So why would you introduce a collapse to "explain" something which is not explainable within the theory
This is like asking, for instance, why would you introduce neutrino masses to explain observed neutrino oscillations if the oscillations are not explainable by the standard model of massless neutrinos? New ideas in physics are introduced precisely because some phenomena are not explainable within old theories.

vanhees71 said:
but at the cost of introducing inconsistencies of the theory?
New theories often look inconsistent at first (e.g. UV divergences in QFT), but then the job of scientists is to further develop the theory to remove the inconsistencies.

vanhees71 said:
And it's also not clear to me what the collapse explains, because it doesn't provide an explanation either for why the specific particle hits the specific spot on the photo plate. So what exactly does it explain?
That's a much better question. The role of collapse is not so much to explain something as to offer a possible ontology behind the measured phenomena. In the collapse picture, the wave function is not merely a probability, but an actual physical thing that exists at the level of a single object. Suppose I ask you what an electron looks like before I measure it. Before I measure it, is it a wave or a particle? Does it have any shape at all before I measure it? With standard minimal QM you cannot answer such questions. With a collapse picture you can. You may say those are philosophical questions, but sometimes thinking about philosophical questions may eventually lead to new measurable predictions. For example, the GRW theory of collapse leads to new predictions, which seem to be ruled out by experiments. There are also other collapse theories which are not (yet) ruled out.
 
Last edited:
  • #39
rubi said:
Would you agree that collapse can be described by a Lindblad equation? If so, then one can take one's algebra of observables and define an algebraic state on it by ##\omega(A)=\mathrm{Tr}(\rho A)##, and the trace-preserving time evolution given by the Lindblad equation defines a stable *-automorphism ##\alpha_t## on the algebra of observables. One can then compute the GNS Hilbert space ##\mathcal{H}_\omega## for ##\omega##, and then there is a theorem that lets us represent ##\alpha_t## by unitary operators. So one can represent the collapse by a unitary evolution by sacrificing the irreducibility of the representation.

Is the final equation on http://en.wikiversity.org/wiki/Open_Quantum_Systems/The_Lindblad_Form what you mean by the Lindblad equation? Does the Lindblad equation only include trace-preserving maps? I think collapse is not trace-preserving.
 
  • #40
atyy said:
I think collapse is not trace preserving.
It is, because the collapse is not only picking one term in the superposition, but also includes the appropriate change of the normalization of that term.
 
  • #41
Demystifier said:
It is, because the collapse is not only picking one term in the superposition, but also includes the appropriate change of the normalization of that term.

Yes, one can define it that way too, but in that case, it is also true that the evolution of the larger system cannot be unitary and deterministic.

Anyway, for the definition of collapse as trace non-preserving, I was thinking of the language used in Nielsen and Chuang around their Eq 8.28. The relationship between definitions in which collapse is defined as trace non-preserving or trace preserving is given in their exercise 8.8, in which one introduces an additional operator.

So if all the measurement operators ##M_m## satisfy the completeness relation ##\sum_m M_m^\dagger M_m = 1##, which is how it's discussed in http://en.wikiversity.org/wiki/Open_Quantum_Systems/The_Lindblad_Form, I think it is the case that collapse is not trace-preserving.
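The two conventions can be checked side by side in a few lines. This is a toy sketch of my own, using projective ##\sigma_z## measurement operators whose completeness relation ##\sum_m M_m^\dagger M_m = 1## holds as in Nielsen and Chuang:

```python
import numpy as np

# Projective measurement of sigma_z on a qubit.  The "bare" update
# rho -> M rho M^dagger has trace equal to the Born probability (< 1),
# while the renormalized update rho -> M rho M^dagger / p is
# trace-preserving by construction.
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)
assert np.allclose(P0.conj().T @ P0 + P1.conj().T @ P1, np.eye(2))  # completeness

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

bare = P0 @ rho @ P0.conj().T
p0 = np.trace(bare).real
print(p0)                                 # 0.5: the trace is the probability, not 1

renormalized = bare / p0
print(np.trace(renormalized).real)        # 1.0: trace restored by renormalization
```

So whether "collapse" is trace-preserving is purely a matter of which of these two maps one calls the collapse.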
 
Last edited:
  • #42
atyy said:
Yes, one can define it that way too, but in that case, it is also true that the evolution of the larger system cannot be unitary and deterministic.

Anyway, for the definition of collapse as trace non-preserving, I was thinking of the language used in Nielsen and Chuang around their Eq 8.28. The relationship between definitions in which collapse is defined as trace non-preserving or trace preserving is given in their exercise 8.8, in which one introduces an additional operator.

So if all the measurement operators sum to 1, which is how it's discussed in http://en.wikiversity.org/wiki/Open_Quantum_Systems/The_Lindblad_Form, I think it is the case that collapse is not trace-preserving.
Ah, you are talking about Lindblad equation in a narrower context, in which it is derived from unitary evolution in the larger Hilbert space. I was talking about Lindblad equation in a wider context, in which it does not necessarily need to be derived from a unitary evolution in the larger space. In such a wider context there is no Eq. (8.28).
 
  • #43
So maybe rubi and you are talking about different Lindblad equations in posts #32 and #35?
 
  • #44
Maybe. But then the answer to his question would be a "trivial" no, so I assumed that he had the wider point of view in mind.
 
  • #45
Demystifier said:
Maybe. But then the answer to his question would be a "trivial" no, so that's why I assumed that he had a wider point of view.

So perhaps the criticism then is that even from the wider point of view, no matter what tricks one uses to get the whole system to evolve deterministically and unitarily, the Lindblad equation does not include collapse because

(1) collapse is probabilistic time evolution

(2) rubi cannot calculate from the Lindblad equation the joint probability of being at position A at time ##t_A## and at position B at time ##t_B##, which is what collapse allows one to do.
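To make point (2) concrete, here is a sketch (my own toy example, not from the thread) of how the collapse rule generates a two-time joint probability: Born rule plus projection at ##t_A##, unitary evolution, Born rule again at ##t_B##, i.e. ##p(a,b) = \mathrm{Tr}\big(P_b\, U P_a \rho P_a U^\dagger\big)##.

```python
import numpy as np

# sigma_z projectors for the two measurements
P = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

# some unitary evolution between t_A and t_B (rotation about sigma_y,
# angle chosen arbitrarily for illustration)
t = 0.3
U = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

joint = np.zeros((2, 2))
for a in range(2):
    for b in range(2):
        collapsed = P[a] @ rho @ P[a]              # unnormalized branch at t_A
        evolved = U @ collapsed @ U.conj().T       # evolve to t_B
        joint[a, b] = np.trace(P[b] @ evolved).real

print(joint)           # the four joint probabilities p(a, b)
print(joint.sum())     # 1.0
```

Keeping the branch unnormalized conveniently makes the final trace the joint probability rather than the conditional one; unitary evolution alone provides no such two-time object.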
 
Last edited:
  • #46
Demystifier said:
Nothing, that's enough for measurement. But measurement is not an explanation.
This is like asking, for instance, why would you introduce neutrino masses to explain observed neutrino oscillations if the oscillations are not explainable by the standard model of massless neutrinos? New ideas in physics are introduced precisely because some phenomena are not explainable within old theories.
There's a very big difference between the introduction of neutrino masses into the Standard Model and the assumption of a collapse in QT. Neutrino oscillations are an observed fact, and you have to introduce neutrino masses into the Standard Model (which is possible, by the way, without destroying the (perturbative) consistency of this model). By contrast, there's no observed fact which would force me to introduce a collapse, and, in addition, the introduction of this idea is very problematic (EPR!). So while I'm practically forced to introduce neutrino masses to adequately describe the observed fact of mixing, there's no necessity to bother oneself with unobserved and unnecessary collapses in QT!

Demystifier said:
New theories often look inconsistent at first (e.g. UV divergences in QFT), but then the job of scientists is to further develop the theory to remove the inconsistencies.

QFT is a pretty successful model (although not a strictly consistent one, one must admit) which can be defined in an approximate sense (perturbative QFT with renormalization, which even has a physical interpretation thanks to Kadanoff and Wilson).

Demystifier said:
That's a much better question. The role of collapse is not so much to explain something as to offer a possible ontology behind the measured phenomena. In the collapse picture, the wave function is not merely a probability, but an actual physical thing that exists at the level of a single object. Suppose I ask you what an electron looks like before I measure it. Before I measure it, is it a wave or a particle? Does it have any shape at all before I measure it? With standard minimal QM you cannot answer such questions. With a collapse picture you can. You may say those are philosophical questions, but sometimes thinking about philosophical questions may eventually lead to new measurable predictions. For example, the GRW theory of collapse leads to new predictions, which seem to be ruled out by experiments. There are also other collapse theories which are not (yet) ruled out.
Since when do you need an "ontology" in physics? Physics is about the description of objective, observable facts about nature, not about providing an ontology (although famous people like Einstein opposed this view vehemently). E.g., it doesn't make sense to ask whether a "particle" (better to say "quantum" here) has a "shape" at all within QT. You can only describe the probability of the outcome of concrete measurements (observables), which are defined as (an equivalence class of) measurement procedures.

Yesterday, someone quoted the book

F. Strocchi, An Introduction to the Mathematical Structure of Quantum Mechanics, World Scientific (2005)

This is one of the best expositions of QT I've seen in years, although everything is unfortunately hidden behind quite formal mathematics. But that's the essence of QT, without any superfluous additions which cause only trouble!
 
  • #47
vanhees71 said:
By contrast, there's no observed fact which would force me to introduce a collapse, and, in addition, the introduction of this idea is very problematic (EPR!). So while I'm practically forced to introduce neutrino masses to adequately describe the observed fact of mixing, there's no necessity to bother oneself with unobserved and unnecessary collapses in QT!

vanhees71 said:
Since when do you need an "ontology" in physics? Physics is about the description of objective observable facts about nature and not to provide an ontology (although famous people like Einstein opposed this view vehemently).

The collapse, or an equivalent assumption, is necessary in quantum mechanics, and its predictions have been verified experimentally.

However, one should be clear that the standard collapse is not intended to provide an ontology, unlike the GRW collapse. Throughout most of this thread, including the OP, the collapse is the standard collapse, not the GRW collapse.

Citing EPR as an objection against collapse shows that you believe ontology is important in physics. It means that you believe that, in special relativity, the cause of an event should be in its past light cone.
 
Last edited:
  • #48
atyy said:
The collapse, or an equivalent assumption, is necessary in quantum mechanics, and its predictions have been verified experimentally.
You keep repeating this every time, but I've not seen a single example of such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of these, I need very convincing experimental evidence for a collapse!

atyy said:
However, one should be clear that the standard collapse is not intended to provide an ontology, unlike the GRW collapse. Throughout most of this thread, including the OP, the collapse is the standard collapse, not the GRW collapse.

atyy said:
As far as I can tell, most of your objections to the standard collapse are because you believe there should be an ontology in physics (the cause of an event should be in its past light cone), and you believe that standard collapse causes trouble for your ontology, which is why you reject it. Citing EPR as a reason not to believe in collapse means that you believe that ontology is important in physics.
It causes trouble not for any whatever-logy but for the overwhelming evidence for the correctness of relativistic space-time in all (at least all local) observations made so far. Either you believe in the existence of a collapse or in Einstein causality and locality. The most successful model ever, the Standard Model of elementary particle physics, obeys both causality and locality. One doesn't need a collapse to derive all of its observable predictions, and these predictions are validated by all observations made so far (to the dismay of the particle theorists, who'd like to find evidence for physics beyond the Standard Model in order to see how to overcome some of its difficulties, including the hierarchy problem and the description of dark matter, where one needs a hint of where to look for direct evidence of what it is made of).
 
  • #49
vanhees71 said:
Uups. Can you translate this for a poor theoretical physicist into physics?
The idea of the algebraic framework is to extract the relevant part of QM (observable facts) and get rid of the mathematical parts that have no relevance (like the choice of a Hilbert space). In QM, we are interested in the behaviour of certain sets of observables (position, momentum, ...) and these observables form an algebra (they can be multiplied, for example). A state of a system tells us all the physical information that can be extracted in principle (like expectation values, probabilities, ...). In QM, we usually have a Hilbert space with operators, and a state is determined by a vector ##\Psi##. Expectation values are given by ##\left<A\right>=\left<\Psi,A\Psi\right>##. A state could also be given by a density matrix ##\rho##, and the expectation values would be ##\left<A\right>=\mathrm{Tr}(\rho A)##. So the expectation value functional takes an observable and spits out a number (the expectation value). Now there is a mathematical theorem (GNS) that says that when we have a certain algebra (of observables) and know all the expectation values of these observables, then we can reconstruct a Hilbert space ##\mathcal H##, a representation ##\pi## of the algebra and a vector ##\Omega##, such that the expectation values are given by ##\left<A\right> = \left<\Omega,\pi(A)\Omega\right>##. (The expectation value functional is usually denoted by ##\omega(A)## rather than ##\left<A\right>##.) But that also means that even if we have an algebra of observables and a state given by a density matrix, we can construct a new Hilbert space such that the state that was formerly given by a density matrix is now a plain old vector state (##\Omega##): We just use our old algebra as the algebra and the "algebraic state" ##\omega(A)=\mathrm{Tr}(\rho A)## as the expectation value functional and apply the theorem. (It constructs the new Hilbert space and the new representation of the algebra explicitly.)

Now what does that look like concretely? Let's say we have an algebra of observables ##\mathfrak A## on a concrete Hilbert space ##\mathcal H## and a density matrix ##\rho## on ##\mathcal H##. The density matrix can always be written as ##\rho=\sum_n \rho_n b_n \left<b_n,\cdot\right>##, where ##(b_n)_n## is an ONB for ##\mathcal H##. We can now define a new Hilbert space ##\mathcal H' = \bigoplus_n\mathcal H##, a representation ##\pi(A) (\bigoplus_n v_n) = \bigoplus_n A v_n## and a vector ##\Omega_\rho = \bigoplus_n \sqrt{\rho_n} b_n##. We can verify that we get the same expectation value as before: ##\mathrm{Tr}(\rho A) = \left<\Omega_\rho,\pi(A)\Omega_\rho\right>##. Every density matrix on ##\mathcal H## can be represented this way by a normalized vector ##\Omega_\rho## in ##\mathcal H'## and since they are normalized, they are related by unitary transformations. So if one has two density matrices ##\rho(t_1)## and ##\rho(t_2)## in ##\mathcal H##, there is a unitary operator ##U(t_2,t_1)## in ##\mathcal H'## such that ##\Omega_{\rho(t_2)}=U(t_2,t_1)\Omega_{\rho(t_1)}##.

Edit: I should probably add what the inner product on ##\mathcal H'## is: ##\left<\bigoplus_n v_n, \bigoplus_n w_n\right>_{\mathcal H'} = \sum_n\left<v_n,w_n\right>_{\mathcal H}##
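The construction above can be checked numerically in finite dimensions. The following is a sketch under my own arbitrary choices (a qubit, ##\rho## with eigenvalues 0.7 and 0.3 in a rotated eigenbasis); ##\mathcal H' = \mathcal H \oplus \mathcal H## is realized as ##\mathbb C^4##, with ##\pi(A) = A \oplus A## acting blockwise:

```python
import numpy as np

rng = np.random.default_rng(0)

# a mixed state on H = C^2, diagonalized as rho = sum_n rho_n |b_n><b_n|
rho_n = np.array([0.7, 0.3])
theta = 0.4
b = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # columns b_0, b_1 form an ONB
rho = sum(p * np.outer(b[:, n], b[:, n].conj()) for n, p in enumerate(rho_n))

# H' = H (+) H as C^4; pi(A) = A (+) A; Omega_rho = (+)_n sqrt(rho_n) b_n
def pi(A):
    return np.kron(np.eye(2), A)                  # block-diagonal action

Omega = np.concatenate([np.sqrt(p) * b[:, n] for n, p in enumerate(rho_n)])

# an arbitrary observable (random Hermitian matrix) on H
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = X + X.conj().T

lhs = np.trace(rho @ A)                           # density-matrix expectation
rhs = Omega.conj() @ pi(A) @ Omega                # vector-state expectation
print(np.allclose(lhs, rhs))                      # True: same expectation values
```

The vector ##\Omega_\rho## is automatically normalized (its norm squared is ##\sum_n \rho_n = 1##), which is what makes the unitary-equivalence argument between different ##\Omega_{\rho(t)}## go through.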
 
Last edited:
  • #50
vanhees71 said:
You keep repeating this every time, but I've not seen a single example of such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of these, I need very convincing experimental evidence for a collapse! It causes trouble not for any whatever-logy but for the overwhelming evidence for the correctness of relativistic space-time in all (at least all local) observations made so far. Either you believe in the existence of a collapse or in Einstein causality and locality. The most successful model ever, the Standard Model of elementary particle physics, obeys both causality and locality. One doesn't need a collapse to derive all of its observable predictions, and these predictions are validated by all observations made so far (to the dismay of the particle theorists, who'd like to find evidence for physics beyond the Standard Model in order to see how to overcome some of its difficulties, including the hierarchy problem and the description of dark matter, where one needs a hint of where to look for direct evidence of what it is made of).
But I don't understand why you associate collapse (or call it the non-unitary measurement evolution postulate, since you don't like the word collapse; that is how it is called in my QM notes, which never mention the word collapse) with the breaking of QFT microcausality. In the sense used here, which doesn't take the wave function as something real (like in the ensemble interpretation you subscribe to), there is no FTL influence or anything like that implied.
 
Last edited:
  • #51
vanhees71 said:
You keep repeating this every time, but I've not seen a single example of such an experimental observation, which would imply that either Einstein causality or QT must be wrong. Before I believe either of these, I need very convincing experimental evidence for a collapse!

The Bell tests need collapse when Alice's and Bob's observations are timelike separated, or when they are calculated in a frame in which their observations are not simultaneous.

vanhees71 said:
It causes trouble not for any whatever-logy but for the overwhelming evidence for the correctness of relativistic space-time in all (at least all local) observations made so far. Either you believe in the existence of a collapse or in Einstein causality and locality. The most successful model ever, the Standard Model of elementary particle physics, obeys both causality and locality. One doesn't need a collapse to derive all of its observable predictions, and these predictions are validated by all observations made so far (to the dismay of the particle theorists, who'd like to find evidence for physics beyond the Standard Model in order to see how to overcome some of its difficulties, including the hierarchy problem and the description of dark matter, where one needs a hint of where to look for direct evidence of what it is made of).

Collapse causes no trouble for relativistic spacetime, unless one believes in a special-relativistic ontology. Einstein causality and locality are beliefs in an ontology. Special relativity does not require Einstein causality and locality; that is one of the great lessons of collapse.
 
  • #52
rubi said:
The idea of the algebraic framework is to extract the relevant part of QM (observable facts) and get rid of the mathematical parts that have no relevance (like the choice of a Hilbert space). In QM, we are interested in the behaviour of certain sets of observables (position, momentum, ...) and these observables form an algebra (they can be multiplied, for example). A state of a system tells us all the physical information that can be extracted in principle (like expectation values, probabilities, ...). In QM, we usually have a Hilbert space with operators, and a state is determined by a vector ##\Psi##. Expectation values are given by ##\left<A\right>=\left<\Psi,A\Psi\right>##. A state could also be given by a density matrix ##\rho##, and the expectation values would be ##\left<A\right>=\mathrm{Tr}(\rho A)##. So the expectation value functional takes an observable and spits out a number (the expectation value). Now there is a mathematical theorem (GNS) that says that when we have a certain algebra (of observables) and know all the expectation values of these observables, then we can reconstruct a Hilbert space ##\mathcal H##, a representation ##\pi## of the algebra and a vector ##\Omega##, such that the expectation values are given by ##\left<A\right> = \left<\Omega,\pi(A)\Omega\right>##. (The expectation value functional is usually denoted by ##\omega(A)## rather than ##\left<A\right>##.) But that also means that even if we have an algebra of observables and a state given by a density matrix, we can construct a new Hilbert space such that the state that was formerly given by a density matrix is now a plain old vector state (##\Omega##): We just use our old algebra as the algebra and the "algebraic state" ##\omega(A)=\mathrm{Tr}(\rho A)## as the expectation value functional and apply the theorem. (It constructs the new Hilbert space and the new representation of the algebra explicitly.)

Now what does that look like concretely? Let's say we have an algebra of observables ##\mathfrak A## on a concrete Hilbert space ##\mathcal H## and a density matrix ##\rho## on ##\mathcal H##. The density matrix can always be written as ##\rho=\sum_n \rho_n b_n \left<b_n,\cdot\right>##, where ##(b_n)_n## is an ONB for ##\mathcal H##. We can now define a new Hilbert space ##\mathcal H' = \bigoplus_n\mathcal H##, a representation ##\pi(A) (\bigoplus_n v_n) = \bigoplus_n A v_n## and a vector ##\Omega_\rho = \bigoplus_n \sqrt{\rho_n} b_n##. We can verify that we get the same expectation value as before: ##\mathrm{Tr}(\rho A) = \left<\Omega_\rho,\pi(A)\Omega_\rho\right>##. Every density matrix on ##\mathcal H## can be represented this way by a normalized vector ##\Omega_\rho## in ##\mathcal H'## and since they are normalized, they are related by unitary transformations. So if one has two density matrices ##\rho(t_1)## and ##\rho(t_2)## in ##\mathcal H##, there is a unitary operator ##U(t_2,t_1)## in ##\mathcal H'## such that ##\Omega_{\rho(t_2)}=U(t_2,t_1)\Omega_{\rho(t_1)}##.

Edit: I should probably add what the inner product on ##\mathcal H'## is: ##\left<\bigoplus_n v_n, \bigoplus_n w_n\right>_{\mathcal H'} = \sum_n\left<v_n,w_n\right>_{\mathcal H}##

As far as I can tell, this corresponds to
(1) A proper mixture and an improper mixture (reduced density matrix) are indistinguishable if one only looks at local observables
(2) Every mixture can be interpreted as a reduced density matrix, and purified.

The physical picture behind this is decoherence. However, decoherence does not derive the collapse. No matter what mathematical tricks one plays, deterministic unitary evolution and the Born rule are insufficient, because
(1) There are two types of time evolution: deterministic and probabilistic
(2) The Born rule does not give the joint probabilities for observations carried out at different times

In addition to standard physics texts, rigorous texts like Holevo's deal extensively with collapse, and it is mentioned in the rigorous text by Dimock.
Holevo https://www.amazon.com/dp/3540420827/?tag=pfamazon01-20
Dimock https://www.amazon.com/dp/1107005094/?tag=pfamazon01-20
 
Last edited by a moderator:
  • #53
atyy said:
As far as I can tell, this corresponds to
(1) A proper mixture and an improper mixture (reduced density matrix) are indistinguishable if one only looks at local observables
(2) Every mixture can be interpreted as a reduced density matrix, and purified.
No, I'm not restricting the set of observables anywhere. Every observable ##A## on ##\mathcal H## corresponds to an observable ##\pi(A)## on ##\mathcal H'## and every pure and mixed state (proper or improper) on ##\mathcal H## corresponds to a vector state ##\Omega## in ##\mathcal H'##. (##\sum_n \rho_n b_n \left<b_n,\cdot\right>## is sent to ##\bigoplus_n \sqrt{\rho_n} b_n##.)

atyy said:
The physical picture behind this is decoherence. However, decoherence does not derive the collapse. No matter what mathematical tricks one plays, deterministic unitary evolution and the Born rule are insufficient, because
(1) There are two types of time evolution: deterministic and probabilistic
It's independent of decoherence. Both types of time evolution can be described by a general Lindblad equation in ##\mathcal H## and this induces a family of time evolution operators ##U(t_2,t_1)## on ##\mathcal H'## as described above. In fact, one doesn't even need a Lindblad equation. It's enough to know the density matrices in ##\mathcal H## at all times to get the family.

atyy said:
(2) The Born rule does not give the joint probabilities for observations carried out at different times
I'm sure one can write down joint probabilities also in ##\mathcal H'##, since ##\mathcal H'## contains exactly the same information as the set of density matrices on ##\mathcal H##. It's only encoded differently (as an infinite direct sum instead of a matrix). It just needs a little bit of extra work to get the correct formulas. Very roughly speaking, I just put the (square roots) of the entries of a density matrix in a list, rather than in a matrix, so if one can use that information to calculate joint probabilities on ##\mathcal H##, one should also be able to do that on ##\mathcal H'##.

In addition to standard physics texts, rigorous texts like Holevo's deal extensively with collapse, and it is mentioned in the rigorous text by Dimock.
Holevo https://www.amazon.com/dp/3540420827/?tag=pfamazon01-20
Dimock https://www.amazon.com/dp/1107005094/?tag=pfamazon01-20
Thanks, I will have a look at them.
 
  • #54
rubi said:
No, I'm not restricting the set of observables anywhere. Every observable ##A## on ##\mathcal H## corresponds to an observable ##\pi(A)## on ##\mathcal H'## and every pure and mixed state (proper or improper) on ##\mathcal H## corresponds to a vector state ##\Omega## in ##\mathcal H'##. (##\sum_n \rho_n b_n \left<b_n,\cdot\right>## is sent to ##\bigoplus_n \sqrt{\rho_n} b_n##.)

Yes, but is it also the case that every observable in ##\mathcal H'## corresponds uniquely to an observable in ##\mathcal H##? If it doesn't, then that would correspond to what physicists call a local observable, since the space ##\mathcal H## is "smaller" or "local" compared to ##\mathcal H'## which is "larger".

Is the theorem you are thinking about what is called the GNS construction on this page about the church of the larger Hilbert space: http://www.quantiki.org/wiki/The_Church_of_the_larger_Hilbert_space? If it is, then I do think it is equivalent to purifications, and the two "churches" of quantum theory. The other denomination is of course the church of the smaller Hilbert space: http://mattleifer.info/wordpress/wp-content/uploads/2008/11/commandments.pdf.

rubi said:
It's independent of decoherence. Both types of time evolution can be described by a general Lindblad equation in ##\mathcal H## and this induces a family of time evolution operators ##U(t_2,t_1)## on ##\mathcal H'## as described above. In fact, one doesn't even need a Lindblad equation. It's enough to know the density matrices in ##\mathcal H## at all times to get the family.

rubi said:
I'm sure one can write down joint probabilities also in ##\mathcal H'##, since ##\mathcal H'## contains exactly the same information as the set of density matrices on ##\mathcal H##. It's only encoded differently (as an infinite direct sum instead of a matrix). It just needs a little bit of extra work to get the correct formulas. Very roughly speaking, I just put the (square roots) of the entries of a density matrix in a list, rather than in a matrix, so if one can use that information to calculate joint probabilities on ##\mathcal H##, one should also be able to do that on ##\mathcal H'##.

What I mean is that if I know the density matrices at all times and the Born rule, there is still experimental data that I cannot predict without collapse or some other postulate. For example, if I know the state is f(x) at t1 and g(x) at t2, I can calculate the probability of the particle being at x=y at t1, and the probability of it being at x=z at t2. However, I cannot calculate the probability of being at z at t2 given that it was at y at t1, i.e. I cannot calculate p(y,z) or p(z|y).
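A discrete toy version of this point (my own sketch, with a qubit in place of position): the projection postulate supplies the two-time joint distribution p(y,z) via ##p(y,z) = \mathrm{Tr}(P_z U P_y \rho P_y U^\dagger)##, which the single-time Born probabilities alone do not determine.

```python
import numpy as np

# Outcome y at t1 with projector P_y, collapse, unitary evolution U to t2,
# then outcome z with projector P_z. (Toy illustration, not from the post.)
rho = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])   # state at t1: |+><+|
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],   # evolution from t1 to t2
              [np.sin(theta),  np.cos(theta)]])
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # projectors |0><0|, |1><1|

joint = np.zeros((2, 2))
for y in range(2):
    for z in range(2):
        post = P[y] @ rho @ P[y]                 # collapsed state at t1 (unnormalized)
        joint[y, z] = np.trace(P[z] @ U @ post @ U.T)

assert np.isclose(joint.sum(), 1.0)              # a genuine joint distribution
# Its t1 marginal reproduces the single-time Born rule:
assert np.allclose(joint.sum(axis=1), [np.trace(P[y] @ rho) for y in range(2)])
```

Note that the t2 marginal of this joint distribution involves the dephased state ##\sum_y P_y \rho P_y##, not ##\rho## itself, which is precisely where the extra postulate enters.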

rubi said:
Thanks, I will have a look at them.

In Holevo's book, it is in the chapter on repeated and continuous measurements, and the concept that is called "collapse" is dealt with by the concept of an instrument. In Dimock's book it is just mentioned, and there is not extensive discussion about it.
 
  • #55
atyy said:
The Bell tests need collapse when Alice's and Bob's observations are timelike separated, or when they are calculated in a frame in which their observations are not simultaneous.
This I don't understand. In Bell tests you start with an entangled pair of photons (a biphoton), created by some local process, e.g. parametric down conversion in a crystal. The photons are mostly emitted back to back, and you simply have to wait long enough to be able to detect them at large distances (making sure that nothing disturbs them and decoheres the state). The single-photon polarizations are maximally random (unpolarized photons), but the 100% correlation between polarizations measured in the same direction is inherent in this state. So it's a property of the biphoton, and the correlations are thus "caused" by its production in this state and not by the measurement of one of the single-photon polarizations. It doesn't matter whether the registration events by A and B are time-like or space-like separated. You'll always measure the correlation due to the entanglement, provided nothing in between destroyed the entanglement through interactions. This shows that no collapse is necessary in the analysis of these experiments (the same holds, of course, when the two polarizers at A's and B's places are not aligned, as is necessary to demonstrate the violation of Bell's inequality or variations of it).
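For concreteness (my own sketch, not from the post), the correlations can indeed be computed directly from the Born rule applied to joint projectors on the biphoton state, with no collapse step anywhere:

```python
import numpy as np

def pol(a):
    """Linear-polarization vector at angle a, in the H/V basis."""
    return np.array([np.cos(a), np.sin(a)])

# Polarization-entangled biphoton state (|HH> + |VV>)/sqrt(2).
psi = np.zeros(4); psi[0] = psi[3] = 1/np.sqrt(2)

def coincidence(alpha, beta):
    """Born-rule probability that BOTH photons pass polarizers set at
    angles alpha (at A) and beta (at B): a joint projector, no collapse."""
    proj = np.kron(np.outer(pol(alpha), pol(alpha)),
                   np.outer(pol(beta), pol(beta)))
    return psi @ proj @ psi

# Perfect correlation for aligned polarizers, as described above:
assert np.isclose(coincidence(0.7, 0.7), 0.5)
# The quantum prediction 0.5*cos^2(alpha - beta) that violates Bell bounds:
assert np.isclose(coincidence(0.0, np.pi/8), 0.5 * np.cos(np.pi/8)**2)
```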

Collapse causes no trouble for relativistic spacetime, unless one believes in a special relativistic ontology. Einstein causality and locality are beliefs about ontology. Special relativity does not require Einstein causality and locality - that is one of the great lessons of collapse.
I don't know what you mean precisely by "special relativistic ontology". The space-time structure described by Minkowski space-time is only consistent with the principle of causality if there cannot be causal influences over space-like distances, and the collapse assumption introduces precisely such a thing: it states that the biphoton state instantaneously collapses, so that letting one of the entangled photons go through a polarization filter at A's place causes B's photon to be in the corresponding complementary polarization state. This happens instantaneously in the usual collapse assumptions and clearly violates Einstein causality, thus directly contradicting the very foundations of QED, which is a very well tested theory. So it's much simpler and more natural not to make this unnecessary assumption, but to take for granted what Born's rule tells you about the correlations of the entangled biphoton state, as detailed above. As I said, I don't see where you need a collapse to describe this (theoretically) pretty simple experiment via quantum theory.
 
  • #56
atyy said:
Yes, but is it also the case that every observable in ##\mathcal H'## corresponds uniquely to an observable in ##\mathcal H##? If it doesn't, then that would correspond to what physicists call a local observable, since the space ##\mathcal H## is "smaller" or "local" compared to ##\mathcal H'## which is "larger".
The space ##\mathcal H'## certainly contains more states than ##\mathcal H##, but is that a bad thing? (Even ##\mathcal H## usually contains pathological states that are never physically realized. Think of ##\sum_n \chi_{[n,n+2^{-n}]}\in L^2(\mathbb R)##, which is normalized and doesn't vanish at ##\infty## or some fancy nowhere continuous function or whatever crazy things mathematicians can come up with.) Every physical situation that can be described in ##\mathcal H## can also be described in ##\mathcal H'## and the results are equivalent. Thus it doesn't matter whether I choose to describe my physics in ##\mathcal H## or ##\mathcal H'##. Of course, one would usually choose ##\mathcal H##, since the description is easier there, but this choice is not physically relevant. The point of the example was not to promote the use of ##\mathcal H'##, but rather to explain that whether time evolution is unitary or not is only a matter of how one organizes the available information and not a physical principle. Physics only cares about probability conservation. Whether that happens unitarily or not depends on the way we choose to encode our states. In other words: We cannot "detect" the Hilbert space. It's analogous to the situation in general relativity, where the choice of coordinate system is not relevant either.

Is the theorem you are thinking about what is called the GNS construction on this page about the church of the larger Hilbert space: http://www.quantiki.org/wiki/The_Church_of_the_larger_Hilbert_space? If it is, then I do think it is equivalent to purifications, and the two "churches" of quantum theory. The other denomination is of course the church of the smaller Hilbert space: http://mattleifer.info/wordpress/wp-content/uploads/2008/11/commandments.pdf.
Yes, I was talking about the GNS construction, but my example was not strictly a GNS construction; it is, however, equivalent to a GNS construction in many situations. The theorem on that page seems to use tensor products instead of direct sums, so I don't think it's the same thing, and it's not clear whether it also works in infinite dimensions. Contrary to what is said on the page, the GNS construction is a much more general result, and it only yields Hilbert spaces that are equivalent to tensor products in special situations. Stinespring's theorem seems to be something completely different in general. In the ##\mathcal H'## I gave, one cannot reconstruct ##\rho## by taking a partial trace, for example.

What I mean is that if I know the density matrices at all times and the Born rule, there is still experimental data that I cannot predict without collapse or some other postulate. For example, if I know the state is f(x) at t1 and g(x) at t2, I can calculate the probability of the particle being at x=y at t1, and the probability of it being at x=z at t2. However, I cannot calculate the probability of being at z at t2 given that it was at y at t1, i.e. I cannot calculate p(y,z) or p(z|y).
Well, if you can do it on ##\mathcal H##, then you can also do it on ##\mathcal H'##. No information gets lost in that transition.

In Holevo's book, it is in the chapter on repeated and continuous measurements, and the concept that is called "collapse" is dealt with by the concept of an instrument. In Dimock's book it is just mentioned, and there is not extensive discussion about it.
Thanks. I can't get hold of that book before Monday, though.
 
  • #57
atyy said:
Yes, but is it also the case that every observable in ##\mathcal H'## corresponds uniquely to an observable in ##\mathcal H##? If it doesn't, then that would correspond to what physicists call a local observable, since the space ##\mathcal H## is "smaller" or "local" compared to ##\mathcal H'## which is "larger".
I also want to add the following to my previous post: For every state ##\Psi## in ##\mathcal H'## and every ##\epsilon>0##, there is a density matrix ##\rho## in ##\mathcal H## such that for all observables ##A##, the following holds: ##|\mathrm{Tr}_{\mathcal H}(\rho A) - \left<\Psi,A\Psi\right>_{\mathcal H'}|<\epsilon##. That means one can find a density matrix in ##\mathcal H## that has expectation values that are arbitrarily close to the expectation values in ##\mathcal H'##, so ##\mathcal H## and ##\mathcal H'## can't be distinguished physically, even if one uses a state in ##\mathcal H'## that doesn't directly come from a state in ##\mathcal H##. This is the content of Fell's theorem.

Edit: Oops. I just realized that you were asking about observables, not states. Well, observables are not something that one gets out of the mathematics, but rather something one has to put in. Just because I use a different Hilbert space, it doesn't mean that I can magically build more measurement devices than I could before. So the right order is to specify what I can possibly measure (for example position, momentum, variances, correlations, ...) and then build a theory that describes these measurements. I can do that in both ##\mathcal H## and ##\mathcal H'##, but both Hilbert spaces also contain a huge number of excess operators that don't correspond to physically realizable measurement apparatuses. And both spaces contain equally many of them (as in cardinality of the set of such operators).

P.S. I'm now reading the paper you quoted.
 
  • #58
rubi said:
Thanks. I can't get hold of that book before monday, though. :H

I'll read your other comments too, will reply later. But before Monday, one can also try

http://arxiv.org/abs/0706.3526 (the "collapse" is defined by postulating an instrument as in Eq 3)
http://arxiv.org/abs/0810.3536 (the "collapse" is again defined by postulating an instrument in section 6.2, Eq 6.7 to Eq 6.12)

What is interesting about the presentation by Heinosaari and Ziman is that the argument leading up to Eq 6.7 almost seems to derive it from the Schroedinger equation and the Born rule. However, I think it requires the assumption that if B is measured a little later than A, the same result is obtained as if B and A were measured at the same time. This seems to be some sort of continuity argument, which is how the projection postulate was argued for by Dirac.
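A minimal sketch of such an instrument (my own illustration, using the generic Lüders form for a sharp observable rather than reproducing Heinosaari and Ziman's equations): outcome y maps ##\rho \mapsto P_y \rho P_y / \mathrm{Tr}(P_y \rho)##, and an immediate repetition of the measurement then yields the same outcome with certainty, which is exactly the repeatability/continuity property mentioned above.

```python
import numpy as np

def luders(rho, P):
    """Lüders instrument for projector P: return the outcome probability
    and the normalized post-measurement state."""
    p = np.trace(P @ rho)
    return p, (P @ rho @ P) / p

P0 = np.diag([1.0, 0.0])                       # projector onto |0>
rho = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]]) # initial state |+><+|

p1, rho_post = luders(rho, P0)                 # first measurement, outcome 0
p2, _ = luders(rho_post, P0)                   # immediate repetition

assert np.isclose(p1, 0.5)
assert np.isclose(p2, 1.0)   # repeatability: the same outcome is now certain
```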
 
  • #59
vanhees71 said:
Where do you need a collapse here? I just measure, e.g., a spin component (to have the simple case of a discrete observable) and take note of the result. If you have a filter measurement (the usually discussed Stern-Gerlach apparatuses are such), I filter out all the partial beams I don't want and am left with a polarized beam in the spin state I want. That's all. I don't need a collapse. The absorption of the unwanted partial beams is due to local interactions of the particles with the absorber. There's no collapse!

Well, in an EPR setup, with experimenters Alice and Bob, Alice performs a measurement on one particle. She gets a result. You can explain that in terms of filters. But, immediately after the measurement, she can compute the probabilities for Bob's result. Now, that can't be due to filtering. She isn't filtering Bob's particles.
 
  • #60
No, it's due to the fact that A knows that the photons are entangled and thus knows what B must measure for his photon. Again: the result for Bob's photon is not caused by Alice's measurement. The preparation happened before, when the biphoton was created by some process (parametric down conversion).
 
