Effective Dynamics of Open Quantum Systems: Stochastic vs Unitary Models

  • #51
Demystifier said:
it's trivial to make an appropriate generalization. Instead of eigenstates |k⟩ with eigenvalues k one can introduce eigenstates |k,l⟩ with eigenvalues k, and in subsequent equations replace label k with label k,l when appropriate.
A. Neumaier said:
No, because the decomposition is no longer unique. Thus your subsequent argument depends on which decomposition you choose.
Demystifier said:
The final result does not depend on it. It is easy to show it, so I leave it as an exercise for you.
In the case of a spin, ##l## is the position of the measured particle, hence a continuous index. Therefore (3) cannot be valid after your suggested change. Otherwise it would be valid in the limit where the position parts of ##k_1## and ##k_2## tend to each other, which contradicts (4).

Thus the proposed exercise is ill-conceived.
 
  • #52
A. Neumaier said:
Since Demystifier copped out, could you please point to a paper or book chapter you worked through, where the case of measuring spin (certainly the most important special case) is treated? Or if this example wasn't needed to make you ''believe at the non-rigorous level that BM solves the measurement problem'', how did you convince yourself of the latter?

At the non-rigorous level, I was happy with a non-degenerate case, because I think I can always add a term that breaks the degeneracy by an arbitrarily small amount, so that it is mathematically non-degenerate but the difference is physically undetectable.
 
  • #53
atyy said:
At the non-rigorous level, I was happy with a non-degenerate case, because I think I can always add a term that breaks the degeneracy by an arbitrarily small amount, so that it is mathematically non-degenerate but the difference is physically undetectable.
But Born's rule in its standard discrete form is robust only under deformations that preserve the discreteness of the spectrum. This cannot resolve the infinite degeneracy of a spin component operator in a single spinning particle.

How would you perturb ##A=\sigma_3## by a tiny amount to get an operator with a nondegenerate spectrum? I don't know of any reasonable perturbation that achieves this. For example, ##A+\epsilon x_3## is still doubly degenerate at each generalized eigenvalue.

Even if you find one, the perturbed version would have to be treated with Born's rule in its continuous form, which doesn't collapse the wave function to a nonexistent normalizable eigenstate.

Thus your perturbation argument is far from convincing!
 
  • #54
A. Neumaier said:
But Born's rule in its standard discrete form is robust only under deformations that preserve the discreteness of the spectrum. This cannot resolve the infinite degeneracy of a spin component operator in a single spinning particle.

How would you perturb ##A=\sigma_3## by a tiny amount to get an operator with a nondegenerate spectrum? I don't know of any reasonable perturbation that achieves this. Even if you find one, the perturbed version would have to be treated with Born's rule in its continuous form, which doesn't collapse the wave function to a nonexistent normalizable eigenstate. Thus your perturbation argument is far from convincing!

You can use the Born rule in continuous form, and collapse it to a normalizable state.

I admit I cannot readily construct a suitable perturbation in all cases.

But just to make sure I understand you - your concern is that, e.g. with spin, the simplest Coulomb potential hydrogen atom treatment has degeneracy, in the sense that spin up and spin down wave functions can have the same energy?
 
  • #55
atyy said:
You can use the Born rule in continuous form, and collapse it to a normalizable state.
How do you do that? The typical reasoning is that one cannot measure the continuous spectrum exactly but only approximately. Thus one splits the continuum into a discrete union of intervals, each representing an uncertain measurement, and applies the corresponding projectors to get the collapse. But with the finite resolution, each of these projectors has again an infinite degeneracy! Thus one cannot maintain both reduction to a normalizable state and nondegeneracy.
atyy said:
But just to make sure I understand you - your concern is that, e.g. with spin, the simplest Coulomb potential hydrogen atom treatment has degeneracy, in the sense that spin up and spin down wave functions can have the same energy?
Of course. Most discrete energy eigenstates of the nonrelativistic hydrogen electron are highly degenerate. This is the reason why one gets a fine splitting in the relativistic treatment, and (since some degeneracy still persists) a further splitting (the Lamb shift) in the QED treatment. The continuous spectrum of the hydrogen electron remains degenerate even in the QED version.

For a multiparticle system that (unlike the hydrogen atom) can dissociate into more than two pieces, part of the continuous spectrum is even infinitely degenerate!
 
  • #56
A. Neumaier said:
How do you do that? The typical reasoning is that one doesn't measure the continuous spectrum exactly but only approximately. Thus one splits the continuum into a discrete union of intervals, each representing an uncertain measurement, and applies the corresponding projectors to get the collapse. But with the finite resolution, each of these projectors has again an infinite degeneracy! Thus you cannot maintain both reduction to a normalizable state and nondegeneracy.

That's an issue that is interesting in its own right, apart from discussions about various interpretations of quantum mechanics.

Let's take position as the most familiar example of a continuous observable. Any scheme for measuring position has a limitation in accuracy. So suppose you are using a procedure that only determines position to an accuracy of ##\pm\,\delta x##. Then in a sense, you're not measuring position, but some related observable ##\hat{X}## that returns a discrete set of possible results: ##n\,\delta x## where ##n## is an integer. What, then, is the complete set of eigenstates of this operator? (Realistically, there is a distinction between returning a fuzzy result with accuracy ##\pm\,\delta x## and returning the precise result ##n\,\delta x##. But I'm going to use the latter, because it's easier to analyze mathematically.)

Well, the answer is that a complete set of eigenstates would be of the form ##\psi_{n,m}## where:
  • ##\psi_{n,m}(x) = 0## if ##x < n\,\delta x##
  • ##\psi_{n,m}(x) = \sin\left(\frac{m \pi x}{\delta x}\right)## if ##n\,\delta x < x < (n+1)\,\delta x##
  • ##\psi_{n,m}(x) = 0## if ##x > (n+1)\,\delta x##
So the index ##m## in ##\psi_{n,m}## is this infinite degeneracy that you're talking about. However, when ##\delta x## is very small, the expectation value of the energy increases rapidly with increasing ##m##, so for practical purposes, can't we assume that only the ##m=1## eigenstate is relevant?
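As a minimal numerical check (assuming units ##\hbar = m = 1## and an illustrative bin width ##\delta x = 10^{-3}##): the kinetic energy of the bin eigenstate ##\psi_{n,m}## is just the particle-in-a-box value ##\tfrac12(m\pi/\delta x)^2##, which indeed grows like ##m^2##.

```python
import numpy as np

# Minimal sketch (assumed units: hbar = mass = 1, illustrative bin width dx = 1e-3).
# Kinetic energy of the bin eigenstates psi_{n,m}(x) ~ sin(m*pi*x/dx), supported
# on a single bin of width dx.  The analytic value is E_m = (m*pi/dx)**2 / 2,
# i.e. it grows quadratically with the degeneracy index m.

dx = 1e-3
x = np.linspace(0.0, dx, 20001)
h = x[1] - x[0]
for m in (1, 2, 5, 10):
    psi = np.sqrt(2.0 / dx) * np.sin(m * np.pi * x / dx)   # normalized on the bin
    dpsi = np.gradient(psi, x)
    e_numeric = 0.5 * np.sum(dpsi**2) * h                   # <p^2>/2 from |psi'|^2
    e_exact = 0.5 * (m * np.pi / dx)**2
    print(m, e_numeric, e_exact)
```

Note, however, that for ##\delta x = 10^{-3}## even the ##m=1## state already carries a kinetic energy of order ##5\times 10^{6}## in these units.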
 
  • #57
stevendaryl said:
Let's take position as the most familiar example of a continuous observable. Any scheme for measuring position has a limitation in accuracy. So suppose you are using a procedure that only determines position to an accuracy of ##\pm\,\delta x##. Then [...] a complete set of eigenstates would be of the form ##\psi_{n,m}## where:
  • ##\psi_{n,m}(x) = 0## if ##x < n\,\delta x##
  • ##\psi_{n,m}(x) = \sin\left(\frac{m \pi x}{\delta x}\right)## if ##n\,\delta x < x < (n+1)\,\delta x##
  • ##\psi_{n,m}(x) = 0## if ##x > (n+1)\,\delta x##
So the index ##m## in ##\psi_{n,m}## is this infinite degeneracy that you're talking about. However, when ##\delta x## is very small, the expectation value of the energy increases rapidly with increasing ##m##, so for practical purposes, can't we assume that only the ##m=1## eigenstate is relevant?
No, because even the first eigenstate in each interval will already have a very high energy, so should be ignorable by the same argument. But one cannot ignore all basis states! This shows that the energy argument is faulty.

What counts is the energy of the superposition, not of a basis state itself. This energy should be small; an example is the state with ##\psi(x)=x## for ##x\in[0,1]## and zero otherwise. But its projection to any measurable interval inside ##[0,1]## has lots of non-negligible Fourier components with high energy. The very slow convergence of the Fourier series plays havoc with any calculations done subsequent to the approximation.
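To see the slow convergence concretely, here is a minimal numerical sketch (the bin ##[a,b]=[0.3,0.4]## and the units ##\hbar=m=1## are illustrative assumptions): expanding the projection of the normalized state ##\psi(x)=\sqrt{3}\,x## onto one bin in that bin's sine basis, the coefficients ##c_m## fall off only like ##1/m##, so the kinetic-energy weight ##|c_m|^2 E_m## of the high-##m## components does not decay.

```python
import numpy as np

# Minimal sketch (assumed bin [a, b] = [0.3, 0.4] inside [0, 1]; hbar = mass = 1).
# Expand the projection of psi(x) = sqrt(3)*x (normalized on [0, 1]) onto one
# measurement bin in that bin's sine basis and watch how slowly the Fourier
# coefficients decay, and how large the energy weight of high-m terms stays.

a, b = 0.3, 0.4
L = b - a
x = np.linspace(a, b, 200001)
h = x[1] - x[0]
psi = np.sqrt(3.0) * x                                        # psi(x) = x, normalized on [0, 1]

for m in (1, 2, 5, 20, 100):
    chi = np.sqrt(2.0 / L) * np.sin(m * np.pi * (x - a) / L)  # bin basis function
    c = np.sum(psi * chi) * h                                 # overlap <chi_m | psi>
    E = 0.5 * (m * np.pi / L)**2                              # kinetic energy of chi_m
    print(m, abs(c), c**2 * E)                                # |c_m| ~ 1/m, c_m^2 * E_m does not decay
```

Most of the norm sits in the lowest few ##c_m##, but the energy (and anything sensitive to derivatives) receives non-negligible contributions from arbitrarily high ##m## - which is the slow Fourier convergence referred to above.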
 
  • #58
A. Neumaier said:
How do you do that? The typical reasoning is that one cannot measure the continuous spectrum exactly but only approximately. Thus one splits the continuum into a discrete union of intervals, each representing an uncertain measurement, and applies the corresponding projectors to get the collapse. But with the finite resolution, each of these projectors has again an infinite degeneracy! Thus one cannot maintain both reduction to a normalizable state and nondegeneracy.

One allows a continuous spectrum to be measured exactly. Then a collapse rule suitable for a continuous variable is given in Eqs. (3) and (4) of http://arxiv.org/abs/0706.3526.

A. Neumaier said:
Of course. Most discrete energy eigenstates of the nonrelativistic hydrogen electron are highly degenerate. This is the reason why one gets a fine splitting in the relativistic treatment, and (since some degeneracy still persists) a further splitting (the Lamb shift) in the QED treatment. The continuous spectrum of the hydrogen electron remains degenerate even in the QED version.

For a multiparticle system that (unlike the hydrogen atom) can dissociate into more than two pieces, part of the continuous spectrum is even infinitely degenerate!

I have never worked through this, but googling came up with an attempt in the spirit of my thinking: https://www.ma.utexas.edu/mp_arc/c/12/12-59.pdf

For an attempt to rigorously show that BM reproduces non-relativistic QM (I believe they treat degenerate eigenvalues also), try:
http://arxiv.org/abs/quant-ph/0308039
http://arxiv.org/abs/quant-ph/0308038
 
  • #59
A. Neumaier said:
In the case of a spin, ##l## is the position of the measured particle, hence a continuous index.
Someone much more clever than me had a perfect response to that:
"No, no, you're not thinking; you're just being logical."
Niels Bohr


Let me give you a hint (but again not all the details). I did not say that ##|k,l\rangle## are a complete basis. They are just states that in a given experimental setup can be distinguished.

Applying the Born rule is like cooking. Either you understand the general principles or you ask for a precise recipe for each possible case.
 
Last edited:
  • #60
Demystifier said:
They are just states that in a given experimental setup can be distinguished.
If it is only finitely many (as in any real experimental setup) it doesn't resolve the infinite degeneracy.
 
  • #61
Demystifier said:
Someone much more clever than me had a perfect response to that:
"No, no, you're not thinking; you're just being logical."
Niels Bohr

I'm not going to venture an opinion about whether this quote is appropriate in the current thread, but I do like the sentiment. There are certain types of counterarguments that sound rational, but can actually be applied endlessly, in every situation, and so in practice often end up being just mud to hurl at your (philosophical) opponents. You can always complain that your opponent is using terms that haven't been given precise enough definitions. You can always complain that your opponent's argument has missing steps, and so is not logically valid. You can always complain that your opponent has insufficient empirical data to justify his conclusions (or that the empirical data has multiple interpretations, only some of which support his conclusions). You can always complain that your opponent's claim that something is impossible only shows a lack of imagination. I could probably put together a toolbox of counterarguments that can be used (with some tweaking) to attack any claim or argument, whatsoever.
 
  • Like
Likes Demystifier
  • #62
atyy said:
For an attempt to rigorously show that BM reproduces non-relativistic QM (I believe they treat degenerate eigenvalues also), try:
http://arxiv.org/abs/quant-ph/0308039
http://arxiv.org/abs/quant-ph/0308038
The first one was suggested by Demystifier in post #30, and I commented on it in post #49. The second is indeed about measurement in the POVM version, which I agree is the simplest form for discussing collapse. But I didn't see how the POVM formula is derived from the Bohmian dynamics; p.40 seems to contain only formulas from quantum mechanics that are free of the Bohmian dynamics.
 
  • #63
A. Neumaier said:
If it is only finitely many (as in any real experimental setup) it doesn't resolve the infinite degeneracy.
That is true, but I don't see it as a problem. No real experiment can resolve the infinite degeneracy, and Bohmian mechanics only claims that it can explain the results of real experiments. Bohmian mechanics does not claim that it can explain a more general mathematical Born rule that cannot be directly tested by real experiments.
 
  • #64
Demystifier said:
That is true, but I don't see it as a problem. No real experiment can resolve the infinite degeneracy
The point is that the recipe to handle degeneracy that you left as a trivial exercise fails if degeneracy is left in the measured operators. To cope with degeneracy (which is necessarily present when you resolve a continuous spectrum only to finite resolution) you need to improve the argument justifying your fundamental theorem!
 
  • #65
A. Neumaier said:
But I found unphysical arguments that affect whatever is done: In going from (5.12) to (5.14) it is claimed that if the support of the initial state is a union of two disjoint regions, this remains so in the future ''for a substantial amount of time''. But for laboratory distances, these times are typically extremely short, of the order of the time one of the light particles involved needs to cross the lab. Thus the effective wave functions (which they later simply call wave functions - see bottom of p.29) are not at all guaranteed to exist for a substantial amount of time, although the authors claim on p.29 that ''the qualifications under which we have established (5.22) are so mild that in practice they exclude almost nothing''.
You would be right if ##\Psi## were a one-particle wave function. But it is really a wave function describing a very large number of particles, because it includes all the particles of the apparatus. Therefore ##\Psi## does not live in 3-dimensional space but in a very high-dimensional configuration space. In such a high-dimensional space the regions really remain disjoint for a very long time.
 
  • #66
Demystifier said:
In such a high-dimensional space the regions really remain disjoint for a very long time.
Why? Is there somewhere an estimate of the times for a reasonably realistic model system?
 
  • #67
A. Neumaier said:
The point is that the recipe to handle degeneracy that you left as a trivial exercise fails if degeneracy is left in the measured operators. To cope with degeneracy (which is necessarily present when you resolve a continuous spectrum only to finite resolution) you need to improve the argument justifying your fundamental theorem!
All these nitpicking details can relatively easily be worked out.
 
  • #68
Demystifier said:
All these nitpicking details can relatively easily be worked out.
How? That something can easily be done is much easier to say than to verify! Your first hint didn't work since it either led to a continuous index or didn't resolve the degeneracy. Thus I don't trust your intuition without seeing the improved argument.
 
  • #69
A. Neumaier said:
Why? Is there somewhere an estimate of the times for a reasonably realistic model system?
For simplicity, suppose that the wave packet of one particle occupies 1/10 of the total volume of the laboratory. Then two such wave packets will typically collide with each other quite often.

But if a one-particle wave packet occupies 1/10 of the total volume, then an ##N##-particle wave packet occupies ##(1/10)^N## of the total configuration-space volume. For ##N=10^{23}## this is an incredibly small number. It should be clear that two such small objects will very rarely collide. Try to estimate typical times by yourself.
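A one-line version of this estimate, using the illustrative numbers above (single-particle fraction 1/10, ##N=10^{23}##), has to be done in log space because the fraction itself underflows any floating-point type:

```python
import math

# Volume fraction (1/10)**N of configuration space occupied by the N-particle
# wave packet, for the illustrative numbers used above (f = 1/10, N = 1e23).
# The fraction itself underflows, so work with its base-10 logarithm.
f = 0.1
N = 1e23
print(N * math.log10(f))   # -1e23, i.e. the packet fills 10**(-1e23) of the volume
```

Two regions that small essentially never overlap, which is the content of the claim above.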
 
  • #70
A. Neumaier said:
How? That something can easily be done is much easier to say than to verify! Your first hint didn't work since it either led to a continuous index or didn't resolve the degeneracy. Thus I don't trust your intuition without seeing the improved argument.
You don't motivate me to make the effort to explain the details. When I explain some details to you, you never say "Ah, thanks, now I understand that. Could you please explain one more thing to me?". Instead, you merely jump to another question without showing any sign that my previous explanations were at least partially successful. That is not motivating.
 
  • #71
Demystifier said:
For simplicity, suppose that the wave packet of one particle occupies 1/10 of the total volume of the laboratory. Then two such wave packets will typically collide with each other quite often.

But if a one-particle wave packet occupies 1/10 of the total volume, then an ##N##-particle wave packet occupies ##(1/10)^N## of the total configuration-space volume. For ##N=10^{23}## this is an incredibly small number. It should be clear that two such small objects will very rarely collide. Try to estimate typical times by yourself.
But the diameter of the wave packets increases linearly with time. Therefore the volumes occupied grow like the ##N##th power of time and quickly become very large.
 
  • #72
A. Neumaier said:
But the diameter of the wave packets increases linearly with time.
Only for free particles. Not, for example, for particles constituting a lattice in a solid-state crystal.
 
  • #73
Demystifier said:
You don't motivate me to make the effort to explain the details. When I explain some details to you, you never say "Ah, thanks, now I understand that. Could you please explain one more thing to me?". Instead, you merely jump to another question without showing any sign that my previous explanations were at least partially successful. That is not motivating.
This is because your explanations have so far not been successful. Success means understanding the complete argument. Debugging an incomplete proof is like debugging a program. One needs many small insights before one gets it right; until then one asks the computer one question after another to find out the missing information.

Jumping to another question is a sign that I have digested the information provided and moved on to the next step. I am asking questions for understanding, not just for fun - I have far more interesting things to do than to waste my time putting someone down.
 
  • #74
A. Neumaier said:
Debugging an incomplete proof is like debugging a program.
I like that analogy. But usually a person who easily finds bugs can also easily fix the bugs by himself. It is confusing that you are so good at the former but not at the latter.
 
  • #75
Demystifier said:
Only for free particles. Not, for example, for particles constituting a lattice in a solid-state crystal.
But it is the universal wave function, hence it involves all particles in the universe. For simplicity take the photons to be massive with unobservably small mass. They move essentially freely; even in matter they move in a kind of quantum Brownian motion with drift, and their support grows at least like the square root of the time. The particles bound in a crystal may be ignored since their volume remains approximately in place, hence factors out, and the movable electrons and photons blow up the remaining factor.
 
  • #76
Demystifier said:
I like that analogy. But usually a person who easily finds bugs can also easily fix the bugs by himself. It is confusing that you are so good at the former but not at the latter.
I can easily fix bugs in my programs and those of my students, but not in those of others. For a bug in a foreign package I usually ask the author or maintainer of the package.

But due to lack of sufficient support, debugging the proof of your theorem has already become too time consuming for me. This thread was not supposed to be about Bohmian mechanics anyway. So I'll quit discussing this subtopic.
 
  • #77
A. Neumaier said:
The first one was suggested by Demystifier in post #30, and I commented on it in post #49.

I'll defer to Demystifier on this. But on this point, my thinking is that although there is a difficulty, it is not particular to BM. What is being assumed is that decoherence works as we expect it to in the measurement process. One is simply assuming that the von Neumann-Zurek picture of measurement does work. If that were to fail, then our ability to shift the classical/quantum cut to include more and more of the universe would fail, and Copenhagen would fail.
 
  • #78
atyy said:
I'll defer to Demystifier on this. But on this point, my thinking is that although there is a difficulty, it is not particular to BM. What is being assumed is that decoherence works as we expect it to in the measurement process. One is simply assuming that the von Neumann-Zurek picture of measurement does work. If that were to fail, then our ability to shift the classical/quantum cut to include more and more of the universe would fail, and Copenhagen would fail.
But this is not quite the same. Decoherence acknowledges that no matter where you place the cut you need to take account of the interaction with the remainder of the universe. Whereas the argument in the paper commented on in #49 states without good reason that one can ignore the interaction with the remainder of the universe.
 
  • #79
A. Neumaier said:
But this is not quite the same. Decoherence acknowledges that no matter where you place the cut you need to take account of the interaction with the remainder of the universe. Whereas the argument in the paper commented on in #49 states without good reason that one can ignore the interaction with the remainder of the universe.

If you look at Zurek's papers, you'll find he also ignores the rest of the universe. He brings in just enough of [system + apparatus + environment] which evolves unitarily to show that decoherence works.
 
  • #80
A. Neumaier said:
No, not throughout. The paper is a survey paper and describes many approaches, including approaches freely using collapse.

But Plenio and Knight also describe a derivation by Gardiner (1988) that starts from the unitary evolution and does not use collapse: The description of this derivation begins on p.31. Formula (78) contains the Hamiltonian of the complete system. The collapse is avoided by the following technical trick:

which is then carried out using the quantum Ito calculus.

An equivalent but far less technical derivation was later given in the paper
H. P. Breuer, F. Petruccione, Stochastic dynamics of reduced wave functions and continuous measurement in quantum optics, Fortschritte der Physik 45, 39-78 (1997).
In particular, pp.53-58 of this paper describe a fairly elementary derivation of a quantum jump process responsible for photodetection, starting with the unitary dynamics and involving no collapse but only standard approximations from statistical mechanics.

The quantum jump processes for general measurement situations are derived from unitarity in the more technical papers [30-32] by Breuer and Petruccione cited in the paper mentioned above. All four papers can be downloaded from http://omnibus.uni-freiburg.de/~breuer/

The Breuer and Petruccione paper, Stochastic dynamics of reduced wave functions and continuous measurement in quantum optics (Fortschritte der Physik), does not deal with selective measurements. So in Copenhagen one does not need collapse in this case either.

Another paper by Breuer and Petruccione http://arxiv.org/abs/quant-ph/0302047 (Fig. 1) explains the difference between selective and non-selective measurements. For selective measurements, Breuer and Petruccione use the standard formalism and invoke collapse.
 
  • #81
A. Neumaier said:
But due to lack of sufficient support, debugging the proof of your theorem has already become too time consuming for me. This thread was not supposed to be about Bohmian mechanics anyway. So I'll quit discussing this subtopic.
I agree. :smile:
 
  • #82
Suppose you want to explain to someone how to get from point A to point B in a big city. How would you do that?

If you want to explain it to a human, that's easy. Just take a map of the city and draw the line corresponding to the path from point A to point B. For a human, that's enough.

But if you want to explain it to a robot, that's not enough. To the robot you must give explicit instructions on how to avoid various obstacles such as cars, pedestrians, trash cans or even cats on the street. For that purpose you must write a complex computer program and fix all the bugs. If you miss any detail of how to avoid a simple obstacle, the robot will stop and say: "It is not possible to get from point A to point B." So it's very hard to explain it to the robot. It can be done, but it's hard.

The experience of explaining physics to some people on this thread looks to me like the experience of explaining the path from point A to point B to a robot. :biggrin:
 
Last edited:
  • Like
Likes vanhees71
  • #83
atyy said:
The Breuer and Petruccione paper, Stochastic dynamics of reduced wave functions and continuous measurement in quantum optics (Fortschritte der Physik), does not deal with selective measurements. So in Copenhagen one does not need collapse in this case either.

Another paper by Breuer and Petruccione http://arxiv.org/abs/quant-ph/0302047 (Fig. 1) explains the difference between selective and non-selective measurements. For selective measurements, Breuer and Petruccione use the standard formalism and invoke collapse.
Your content description is incorrect. The first paper does deal with selective measurements, as described in the second paper.

The second paper is an overview of how to model an open quantum system without explicitly taking the detector into account (except qualitatively in the choice of the reduced model). The bottom half of Figure 1 is about selective measurement, and the bottom left is the reduced description by a Markov process in Hilbert space, which gives the piecewise deterministic process = PDP = quantum jump process discussed in post #1. The language used in the second paper is on three levels. On the highest level, between (12) and (13), the system is described in traditional Copenhagen language, using the projection postulate amounting to collapse. In the paragraph containing (17), the system is described on the second level in an alternative ensemble language, where instead of projection one talks about a subensemble conditioned on a specific outcome. This corresponds to the minimal statistical interpretation, framed as a stochastic description in terms of classical conditional probabilities for the process describing the stochastic measurement results (so that the notion of conditioning makes sense). Finally, in the paragraph containing (22), the system is described on the third level as a classical stochastic piecewise deterministic (drift and jump) process for the wave function in which the jumps depend stochastically on the measurement results. This is the quantum jump process discussed in post #1. The arguments in this section serve to demonstrate that the three descriptions are in some sense equivalent, though the higher the level the more precise the description. In particular, on the third level, the complete (reduced) quantum measurement process is fully described by the classical PDP, and hence has a fully classical ontology.

Completely lacking in the second paper is any discussion of how the reduced description is related to a complete microscopic picture of the detection process, including a bath responsible for the dissipation. The latter is the central square in Figure 1. It is only remarked in passing - before (7) and in the middle of p.9 - that it can be done by neglecting memory effects. How it is done is neither stated nor referenced, since the goal of the paper is very different - namely to introduce the central physical concepts and techniques for open quantum systems - i.e., systems in an already reduced description.

This gap is filled, however, in the papers cited in post #28. There one starts with a unitary dynamics only and uses the standard approximation tools from statistical physics to derive the quantum jump process. In particular, the first paper by Breuer and Petruccione derives, for a few practically relevant examples, the PDP from unitarity in exactly the form discussed in the second paper.
A. Neumaier said:
In particular, pp.53-58 of this paper describe a fairly elementary derivation of a quantum jump process responsible for photodetection, starting with the unitary dynamics and involving no collapse but only standard approximations from statistical mechanics.
The other three papers mentioned there derive the PDP in a much more general (and much more abstract) framework.

The two papers together therefore demonstrate that selective measurement in QM with collapse upon each measurement of an observable with a discrete spectrum is derivable from unitary quantum mechanics under the conventional approximations made in statistical mechanics.
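For concreteness, here is a minimal sketch of what such a PDP trajectory looks like: a quantum-jump (Monte Carlo wave function) unraveling for a resonantly driven two-level atom with photon counting. The Rabi frequency Omega, decay rate gamma, time step and total time are illustrative assumptions, and the scheme is the generic textbook jump unraveling rather than the specific models of the cited papers.

```python
import numpy as np

# Minimal sketch of a piecewise deterministic (quantum jump) trajectory for a
# resonantly driven, decaying two-level atom, hbar = 1.  Assumed illustrative
# parameters: Rabi frequency Omega, decay rate gamma, Euler step dt.
# Between detections the unnormalized state drifts under the non-Hermitian
# effective Hamiltonian; each photodetection triggers a jump L|psi>/||L|psi>||.

rng = np.random.default_rng(0)
Omega, gamma, dt, T = 1.0, 0.2, 1e-3, 50.0

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus
H_eff = 0.5 * Omega * sigma_x - 0.5j * (L.conj().T @ L)

psi = np.array([1.0, 0.0], dtype=complex)    # start in the ground state
r = rng.uniform()                            # threshold for the next jump
jump_times = []
for step in range(int(T / dt)):
    psi = psi - 1j * dt * (H_eff @ psi)      # deterministic drift piece
    if np.vdot(psi, psi).real < r:           # squared norm has dropped below r
        psi = L @ psi
        psi /= np.linalg.norm(psi)           # jump: collapse to the ground state
        jump_times.append(step * dt)
        r = rng.uniform()

print(len(jump_times), "photodetections (jumps) up to time", T)
```

Averaging many such trajectories reproduces the master equation for the reduced density matrix, while a single trajectory shows exactly the drift-plus-jump structure of the PDP described above.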
 
Last edited:
  • Like
Likes Dilatino
  • #84
A. Neumaier said:
Your content description is incorrect. The first paper does deal with selective measurements, as described in the second paper.

OK, yes, I see the first paper does do selective measurements.

A. Neumaier said:
The second paper is an overview of how to model an open quantum system without explicitly taking the detector into account (except qualitatively in the choice of the reduced model). The bottom half of Figure 1 is about selective measurement, and the bottom left is the reduced description by a Markov process in Hilbert space, which gives the piecewise deterministic process = PDP = quantum jump process discussed in post #1. The language used in the second paper is on three levels. On the highest level, between (12) and (13), the system is described in traditional Copenhagen language, using the projection postulate amounting to collapse. In the paragraph containing (17), the system is described on the second level in an alternative ensemble language, where instead of projection one talks about a subensemble conditioned on a specific outcome. This corresponds to the minimal statistical interpretation, framed as a stochastic description in terms of classical conditional probabilities for the process describing the stochastic measurement results (so that the notion of conditioning makes sense). Finally, in the paragraph containing (22), the system is described on the third level as a classical stochastic piecewise deterministic (drift and jump) process for the wave function in which the jumps depend stochastically on the measurement results. This is the quantum jump process discussed in post #1. The arguments in this section serve to demonstrate that the three descriptions are in some sense equivalent, though the higher the level the more precise the description. In particular, on the third level, the complete (reduced) quantum measurement process is fully described by the classical PDP, and hence has a fully classical ontology.

Completely lacking in the second paper is any discussion of how the reduced description is related to a complete microscopic picture of the detection process, including a bath responsible for the dissipation. The latter is the central square in Figure 1. It is only remarked in passing - before (7) and in the middle of p.9 - that it can be done by neglecting memory effects. How it is done is neither stated nor referenced, since the goal of the paper is very different - namely to introduce the central physical concepts and techniques for open quantum systems - i.e., systems in an already reduced description.

This gap is filled, however, in the papers cited in post #28. There one starts with a unitary dynamics only and uses the standard approximation tools from statistical physics to derive the quantum jump process. In particular, the first paper by Breuer and Petruccione derives, for a few practically relevant examples, the PDP from unitarity in exactly the form discussed in the second paper.

In the first paper by Breuer and Petruccione, they still assume collapse. On p.49 of their 1997 Fortschritte der Physik paper they state "This interpretation is necessary because each application of the Chapman-Kolmogorov equation implies a state reduction fixed by the measurement scheme."

A. Neumaier said:
The other three papers mentioned there derive the PDP in a much more general (and much more abstract) framework.

The two papers together therefore demonstrate that selective measurement in QM with collapse upon each measurement of an observable with a discrete spectrum is derivable from unitary quantum mechanics under the conventional approximations made in statistical mechanics.

OK, I'll look at the other three papers. But the first one still assumes collapse via the Chapman-Kolmogorov equation.
 
  • #85
atyy said:
On p.49 of their 1997 Fortschritte der Physik paper they state "This interpretation is necessary because each application of the Chapman-Kolmogorov equation implies a state reduction fixed by the measurement scheme."
But the derivation on pp.53-58 to which I referred does not refer to collapse and is completely independent of the considerations on p.49. The latter considerations only serve to relate their summary of the general, more abstract case from [30-32] to the conventional measurement discussion.

But for the special cases explicitly treated later, the measurement scheme is completely described by the total Hamiltonian, and no collapse assumption enters anywhere. The wave function dynamics of the total unitary system is treated as a completely classical dynamical system, and reduced to a classical stochastic equation in Hilbert space in the same way as one would proceed for any other classical dynamical system. Thus there is no room for a collapse assumption.

Instead, the remark on p.49 just amounts to an interpretation of the final result: Each application of the Chapman-Kolmogorov equation (derived directly from unitarity) implies a state reduction fixed by the measurement scheme. Hence it proves that collapse is derivable from unitarity.
 
  • #87
A. Neumaier said:
But the derivation on pp.53-58 to which I referred does not refer to collapse and is completely independent of the considerations on p.49. The latter considerations only serve to relate their summary of the general, more abstract case from [30-32] to the conventional measurement discussion.

But for the special cases explicitly treated later, the measurement scheme is completely described by the total Hamiltonian, and no collapse assumption enters anywhere. The wave function dynamics of the total unitary system is treated as a completely classical dynamical system, and reduced to a classical stochastic equation in Hilbert space in the same way as one would proceed for any other classical dynamical system. Thus there is no room for a collapse assumption.

Instead, the remark on p.49 just amounts to an interpretation of the final result: Each application of the Chapman-Kolmogorov equation (derived directly from unitarity) implies a state reduction fixed by the measurement scheme. Hence it proves that collapse is derivable from unitarity.

On p.55, they write "Proceeding as in Sec. 2 one is led to expression (23)".

In Section 2, p.45, just after Eq 17, they write "According to the theory of quantum measurement such a resolution corresponds to a complete, orthogonal measurement [38] of the environment."

Ref [38] is Braginsky and Khalili, which assumes state reduction as a postulate.
 
  • #88
atyy said:
On p.55, they write "Proceeding as in Sec. 2 one is led to expression (23)".

In Section 2, p.45, just after Eq 17, they write "According to the theory of quantum measurement such a resolution corresponds to a complete, orthogonal measurement [38] of the environment."

Ref [38] is Braginsky and Khalili, which assumes state reduction as a postulate.
''corresponds to'' is not an assumption but a translation of the formulas into the Copenhagen interpretation language. If you just look at the chain of equations comprising the true arguments you'll see that the arguments nowhere make use of this interpretation language. Thus the words just serve to guide the intuition of readers well-acquainted with the collapse language and its meaning. One could as well give first the complete formal argument without the interpretational comments and then comment afterwards on what it means in terms of the collapse picture. Indeed, this is done in [30], which is:
H. P. Breuer & F. Petruccione,
Stochastic dynamics of open quantum systems: Derivation of the differential Chapman-Kolmogorov equation,
Physical Review E51, 4041-4054 (1995).
Everything is done from scratch in terms of a classical stochastic process in the projective space associated with system+detector. Since only classical probabilities are used it is impossible for quantum mechanical collapse to enter the argument. But at the end one gets the PDP. Only after everything has been done, the PDP is interpreted in terms of quantum jumps.
 
  • #89
A. Neumaier said:
''corresponds to'' is not an assumption but a translation of the formulas into the Copenhagen interpretation language. If you just look at the chain of equations comprising the true arguments you'll see that the arguments nowhere make use of this interpretation language. Thus the words just serve to guide the intuition of readers well-acquainted with the collapse language and its meaning. One could as well give first the complete formal argument without the interpretational comments and then comment afterwards on what it means in terms of the collapse picture. Indeed, this is done in [30], which is:
H. P. Breuer & F. Petruccione,
Stochastic dynamics of open quantum systems: Derivation of the differential Chapman-Kolmogorov equation,
Physical Review E51, 4041-4054 (1995).
Everything is done from scratch in terms of a classical stochastic process in the projective space associated with system+detector. Since only classical probabilities are used it is impossible for quantum mechanical collapse to enter the argument. But at the end one gets the PDP. Only after everything has been done, the PDP is interpreted in terms of quantum jumps.
Stochastic dynamics of open quantum systems: Derivation of the differential Chapman-Kolmogorov equation


1. Does the construction in section II.B (beginning after Eq 33) hold for systems that are not statistically independent?

2. Is Eq 43 dependent on the choice of basis in Eq 42?
 
Last edited by a moderator:
  • #90
atyy said:
Stochastic dynamics of open quantum systems: Derivation of the differential Chapman-Kolmogorov equation

1. Does the construction in section II.B (beginning after Eq 33) hold for systems that are not statistically independent?

2. Is Eq 43 dependent on the choice of basis in Eq 42?
1. No. This subsection just explains why the tensor product gives the correct description of two independent systems, and that the reduction formula (44) recovers the description of the subsystem exactly. This cannot be true if there are interactions between the two systems; the latter is the case treated in Part III.

2. Possibly yes. He doesn't assert basis independence, and it isn't obviously true. So it seems that in the noninteracting case there are many possible reduced dynamics. This freedom is restricted in the interacting case since the argument in Part III depends on the fact (66) that the basis there is an eigenbasis of ##H_2##.
 
Last edited:
  • #91
A. Neumaier said:
Finally, in the paragraph containing (22), the system is described on the third level as a classical stochastic piecewise deterministic (drift and jump) process for the wave function in which the jumps depend stochastically on the measurement results. This is the quantum jump process discussed in post #1. The arguments in this section serve to demonstrate that the three descriptions are in some sense equivalent, though the higher the level the more precise the description. In particular, on the third level, the complete (reduced) quantum measurement process is fully described by the classical PDP, and hence has a fully classical ontology.

Thanks for the replies above, I read those too. I'm going back here to your comment on their other paper, the overview http://arxiv.org/abs/quant-ph/0302047. In their discussion around Eq 22, they do say:

"Physically, ##\psi(t)## represents the state of the reduced system which is conditioned on a specific readout of the measurement carried out on the environment. Consequently, the stochastic evolution depends on the measurement scheme used to monitor the environment."

So if that section applies to their derivation of the Chapman-Kolmogorov equation in http://omnibus.uni-freiburg.de/~breuer/paper/p4041.pdf, then I would expect the measurement of the environment somehow enters one of the assumptions they make, though at this point I am not sure where.
 
  • #92
atyy said:
Thanks for the replies above, I read those too. I'm going back here to your comment on their other paper, the overview http://arxiv.org/abs/quant-ph/0302047. In their discussion around Eq 22, they do say:

"Physically, ##\psi(t)## represents the state of the reduced system which is conditioned on a specific readout of the measurement carried out on the environment. Consequently, the stochastic evolution depends on the measurement scheme used to monitor the environment."

So if that section applies to their derivation of the Chapman-Kolmogorov equation in http://omnibus.uni-freiburg.de/~breuer/paper/p4041.pdf, then I would expect the measurement of the environment somehow enters one of the assumptions they make, though at this point I am not sure where.
It is in the dynamics of the detector, which must include enough of the environment to produce irreversible results (and hence determines what is read out). B & P model the latter by assuming separated time scales and the validity of the Markov approximation - which hold only if the detector is big enough to be dissipative. (The latter is typically achieved by including in the detector a heat bath consisting of an infinite number of harmonic oscillators.) Since B & P make these assumptions without deriving them, their analysis holds for general dissipative detectors. But of course for any concrete application one must check (as always in statistical mechanics) that these assumptions are plausible.

In sufficiently idealized settings, these assumptions can actually be proved rigorously, but this is beyond the scope of the treatment by B & P. Rigorous results (without the discussion of selective measurement, but probably sufficient to establish the assumptions used by B & P) were first derived by Davies (1974) and in later papers with the same title. See also the detailed survey:
H. Spohn, Kinetic equations from Hamiltonian dynamics: Markovian limits. Reviews of Modern Physics, 52 (1980), 569.

In the cases treated by B & P, the discrete PDP process corresponds to photodetection, which measures particle number, which has a discrete spectrum; the diffusion processes correspond to homodyne or heterodyne detection, which measure quadratures, which have a continuous spectrum. B & P obtain the latter from the PDP by a limiting process in the spirit of the traditional approach treating a continuous spectrum as a limit of a discrete spectrum.
 
  • #93
A. Neumaier said:
It is in the dynamics of the detector, which must include enough of the environment to produce irreversible results (and hence determines what is read out). B & P model the latter by assuming separated time scales and the validity of the Markov approximation - which hold only if the detector is big enough to be dissipative. (The latter is typically achieved by including in the detector a heat bath consisting of an infinite number of harmonic oscillators.) Since B & P make these assumptions without deriving them, their analysis holds for general dissipative detectors. But of course for any concrete application one must check (as always in statistical mechanics) that these assumptions are plausible.

In sufficiently idealized settings, these assumptions can actually be proved rigorously, but this is beyond the scope of the treatment by B & P. Rigorous results (without the discussion of selective measurement, but probably sufficient to establish the assumptions used by B & P) were first derived by Davies (1974) and in later papers with the same title. See also the detailed survey:
H. Spohn, Kinetic equations from Hamiltonian dynamics: Markovian limits. Reviews of Modern Physics, 52 (1980), 569.

In the cases treated by B & P, the discrete PDP process corresponds to photodetection, which measures particle number, which has a discrete spectrum; the diffusion processes correspond to homodyne or heterodyne detection, which measure quadratures, which have a continuous spectrum. B & P obtain the latter from the PDP by a limiting process in the spirit of the traditional approach treating a continuous spectrum as a limit of a discrete spectrum.

So it seems the collapse assumption comes with the Markovian assumption.

In these treatments, the measurement problem is not solved, because unitary evolution alone has no observable outcome (such as a particle position). If we are using the collapse to say when the particle acquires a position, then it is the Markov approximation which causes collapse which determines when a detection is made - which is not satisfactory since it doesn't seem reasonable for an approximation to cause reality.
 
  • #94
atyy said:
So it seems the collapse assumption comes with the Markovian assumption.

In these treatments, the measurement problem is not solved, because unitary evolution alone has no observable outcome (such as a particle position). If we are using the collapse to say when the particle acquires a position, then it is the Markov approximation which causes collapse which determines when a detection is made - which is not satisfactory since it doesn't seem reasonable for an approximation to cause reality.
The Markov assumption is used also in classical statistical mechanics to derive hydromechanics or the Boltzmann equation. Thus you seem to propose that classical statistical mechanics is not satisfactory either. This is a defensible position. But at least the arguments show that to go from unitarity to definite (i.e., irreversible) outcomes in Hamiltonian quantum mechanics one doesn't need to assume more than to go from reversibility to irreversibility in Hamiltonian classical mechanics.

Moreover, I had given references that prove the Markov assumption in the low coupling infinite volume limit. Thus it is sometimes derivable and not an assumption. Your criticism that it is an approximation only is moot since for pointer readings it suffices to have approximately definite outcomes, and these are guaranteed by statistical mechanics for macroscopic observables (with an accuracy of ##N^{-1/2}## where ##N## is of the order of ##10^{23}## or more).
 
Last edited:
  • Like
Likes Mentz114
  • #95
A. Neumaier said:
The Markov assumption is used also in classical statistical mechanics to derive hydromechanics or the Boltzmann equation. Thus you seem to propose that classical statistical mechanics is not satisfactory either. This is a defensible position. But at least the arguments show that to go from unitarity to definite (i.e., irreversible) outcomes in Hamiltonian quantum mechanics one doesn't need to assume more than to go from reversibility to irreversibility in Hamiltonian classical mechanics.

Moreover, I had given references that prove the Markov assumption in the low coupling infinite volume limit. Thus it is sometimes derivable and not an assumption. Your criticism that it is an approximation only is moot since for pointer readings it suffices to have approximately definite outcomes, and these are guaranteed by statistical mechanics for macroscopic observables (with an accuracy of ##N^{-1/2}## where ##N## is of the order of ##10^{23}## or more).

It isn't the same. In classical statistical mechanics, a particle has a definite outcome (e.g. position) at all times. This is not true in quantum mechanics. It is not sufficient to have approximately definite outcomes.
 
  • #96
atyy said:
it is the Markov approximation which causes collapse which determines when a detection is made - which is not satisfactory since it doesn't seem reasonable for an approximation to cause reality.
This is not a cause as in causality but only a cause in the sense of explanation. Thus your claim amounts to: ''it is the Markov approximation which explains collapse which determines when a detection is made - which is not satisfactory since it doesn't seem reasonable for an approximation to explain reality", and here the second half of the sentence is no longer reasonable. Everywhere in physics we explain reality by making approximations. This is the only way we can explain anything at all!
 
  • #97
atyy said:
It is not sufficient to have approximately definite outcomes.
Why not? One cannot read a pointer very accurately.
 
  • #98
A. Neumaier said:
Why not? One cannot read a pointer very accurately.

In classical mechanics there is an underlying sharp reality (e.g. Newtonian mechanics). Then our inability to read the reality accurately is taken care of by coarse graining and probability. The coarse graining does not cause reality to appear. Reality exists before the coarse graining is done.

In contrast, in quantum mechanics, the sharp reality of a unitarily evolving quantum state is not enough, because it does not specify e.g. position or whatever definite measurement outcome is seen. The measurement outcome is part of reality, so it seems that the wave function does not specify all of reality. Consequently, if collapse appears by coarse graining, then the coarse graining is causing reality to appear, which is quite different from classical mechanics.
 
  • #99
atyy said:
In classical mechanics there is an underlying sharp reality (e.g. Newtonian mechanics). Then our inability to read the reality accurately is taken care of by coarse graining and probability. The coarse graining does not cause reality to appear. Reality exists before the coarse graining is done.

In contrast, in quantum mechanics, the sharp reality of a unitarily evolving quantum state is not enough, because it does not specify e.g. position or whatever definite measurement outcome is seen. The measurement outcome is part of reality, so it seems that the wave function does not specify all of reality. Consequently, if collapse appears by coarse graining, then the coarse graining is causing reality to appear, which is quite different from classical mechanics.
Just as in classical mechanics, only the Markov property is assumed. The jump process follows - hence collapse.

Nothing causes reality to appear - reality is, and was before anyone dreamt of quantum mechanics. Whatever is done in the paper is done on paper only - therefore explaining things, not causing anything! Coarse graining explains collapse, and hence explains why QM matches observed reality.

Similarly: In classical mechanics the underlying reality is strictly conservative. There is no dissipation of energy, though the latter characterizes reality. To have dissipation, one must postulate an additional friction axiom that is the classical analogue of the collapse. However, friction is found to arise from the Markov approximation. Thus in your words, classical coarse graining is causing friction to appear - which is not satisfactory since it doesn't seem reasonable for an approximation to cause the reality of friction. In my words, understanding that friction comes from coarse graining is as big an insight as that collapse comes from coarse graining. In both cases, it bridges the difference between the dynamics of an isolated system and that of an open system. The explanation by coarse graining is in both cases fully quantitative and consistent with experiment, hence has all the features a good scientific explanation should have.
 
  • #100
A. Neumaier said:
Just as in classical mechanics, only the Markov property is assumed. The jump process follows - hence collapse.

Nothing causes reality to appear - reality is, and was before anyone dreamt of quantum mechanics. Whatever is done in the paper is done on paper only - therefore explaining things, not causing anything! Coarse graining explains collapse, and hence explains why QM matches observed reality.

Similarly: In classical mechanics the underlying reality is strictly conservative. There is no dissipation of energy, though the latter characterizes reality. To have dissipation, one must postulate an additional friction axiom that is the classical analogue of the collapse. However, friction is found to arise from the Markov approximation. Thus in your words, classical coarse graining is causing friction to appear - which is not satisfactory since it doesn't seem reasonable for an approximation to cause the reality of friction. In my words, understanding that friction comes from coarse graining is as big an insight as that collapse comes from coarse graining. In both cases, it bridges the difference between the dynamics of an isolated system and that of an open system. The explanation by coarse graining is in both cases fully quantitative and consistent with experiment, hence has all the features a good scientific explanation should have.

Don't focus on collapse. Focus on the measurement outcome, which needs no collapse. If one has a unitarily evolving wave function, at what point in time does the particle acquire a position?

It is different from classical physics where the particle has a position, before any coarse graining that makes friction appear.
 
