Decoherence does not collapse the wavefunction

In summary, decoherence does not directly collapse the wavefunction; rather, it explains the appearance of collapse through the leakage of quantum information into the environment. This process is reversible in principle but thermodynamically irreversible in practice. Decoherence is not equivalent to wavefunction collapse, and it is often seen as an escape mechanism for those who are uncomfortable with the idea of true collapse.
  • #36
Fully linear quantum mechanics does not allow for collapse. A mild non-linearity allows you to have collapses.
In fact, the description of an individual system that involves a wave function may be nonlinear, while the evolution of statistical ensembles of systems, described by a density matrix, may still be linear.
 
  • #37
StevieTNZ said:
Quantum mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?

As you see, there are "interpretational differences". If you hold that "collapse" is a conceptual process, then it is meaningless to say "it occurs"; rather, one says the theorist "collapses" his description upon new information (my position).

But taking the other side for argument's sake: quantum mechanics describes the evolution of the system between measurements (or post-preparation, or pre-destructive-detection) via unitary operators. The unitarity conserves probability (or, in the relativistic setting, probability current). The problem with describing a collapse (whether it be "real" or not) is that its language assumes an update from what we can predict for the outcome of a measurement to what we know when we assume a specific measured value. Even if "collapse has been realized", until we integrate that assumption we still describe the system via the equivalent of a density operator. In this setting "collapse" is represented by decoherence. There is a change in the entropy of the representation. This implies a non-unitary (though still linear?) evolution of the system itself during the measurement process.

Basic QM doesn't describe the evolution during measurement, only between measurements, and thus says nothing about the linearity vs non-linearity of the measurement process, nor about the reality vs virtuality of collapse. It says that after measurement we can update our wave-function to represent the known measured value. If we don't know the value but it is still recorded somewhere, we can use a "classically" probabilistic description (a density operator) until we access the record.

Now, theorists trying to push the envelope have considered non-linear perturbations of QM to see if they can "explain" collapse or measurement. From my position (which is pretty close to the orthodox CI interpretation) this is a non-issue. The distinction between classical and quantum physics is one of fundamental format of description, and one does not "explain" a change of description. One can express classical physics in the same format as QM, and one gets the same "collapses" when one integrates new measurement values. In so doing one sees that collapse is non-physical.
 
  • #38
Zarqon said:
When I think of an example where you measure on the state without telling me I get the opposite conclusion, explained by the following:

Consider that I start with the state |+>. If I measure in the |+>,|-> basis I would now find the state |+> with 100% probability. Let's now consider what happens if you did a measurement in the |0>,|1> basis without telling me. You would "collapse" the state to one of them, let's just say it happened to be |1>.

Now, without you telling me anything, i.e. my knowledge about the system does not change, I now have a non-zero probability of measuring |-> (50%) if I again measure in my basis. The probability of measuring |-> has thus changed without my knowledge being changed at all.

I can only interpret this as the fact that the physical state has actually changed, which is completely different from any classical analogy, where no amount of information update can ever change the location of either keys or glasses.

Yes this is quite correct. Measurement is a physical act and it will sometimes change the physical system. If you think of it being in a state then you must say the state has changed (provided it was not already in an eigen-state).

Take your example again, but let me tell you what I did physically (placing an intervening measuring device) without telling you the recorded outcome. You will then get the correct probabilistic predictions of outcomes for your subsequent measurements if we repeat the process over and over to see the relative frequencies.

You will use a density matrix to describe my act of measurement without knowing the measured outcomes. It will give you the same probabilities for your subsequent measurements as you observe.

Now if we take the cases where I measured values (supposing the predictions for your subsequent measurement were not 50%-50%), I could make more precise predictions of the probabilities of outcomes for those subsequent to |1> measurements, and likewise for those subsequent to |0> measurements. In short, I will see a correlation between your subsequent measurements and my hidden 1-vs-0 records. This means I can better predict individual outcomes than you can, since I have more information; however, it does not invalidate the probability distribution you see. My sharp |1> vs |0> is neither less nor more valid than your rho at predicting, given the information we individually have available.
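A minimal numerical sketch of this point (my own illustration, in Python/numpy, assuming the usual conventions |±> = (|0> ± |1>)/√2): representing the unread intermediate measurement by the outcome-weighted mixture, the density matrix reproduces exactly the 50% chance of |-> that Zarqon describes.

[code]
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

rho = np.outer(plus, plus)                      # initial pure state |+><+|

# Unread |0>,|1> measurement: weight each outcome by its Born probability.
p0 = abs(ket0 @ plus) ** 2                      # 0.5
p1 = abs(ket1 @ plus) ** 2                      # 0.5
rho_unread = p0 * np.outer(ket0, ket0) + p1 * np.outer(ket1, ket1)

proj_minus = np.outer(minus, minus)
print(np.trace(proj_minus @ rho).real)          # 0.0: |-> impossible before
print(np.trace(proj_minus @ rho_unread).real)   # 0.5: |-> at 50% after
[/code]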

I further assert there is no foundational difference between how the kets and the density operators are used. Neither of them is a "state"; both are representations of probable behavior.

You still didn't address my reversed application of the "mode vectors" ("state vectors", as you'd say). The example shows the time symmetry of QM and the appropriate time-reversed parsing of the experimental predictions, and it shows the "kets" changing to different "states" between a given pair of measurements purely because we are reversing our conditional probabilities. It shows, to my mind, that the kets refer not to states of the system but to states of our knowledge about the system.
 
  • #39
jambaugh said:
Even if "collapse has been realized" we will still, until integrating that assumption describe the system as via the equivalent of a density operator. In this setting "collapse" is represented by decoherence. There is a change in the entropy of the representation. This implies a non-unitary (though still linear?) evolution of the system itself during the measurement process.

If the collapse were described mathematically in this way, then we would certainly have a problem. But it can be described in a different way. Collapse happens objectively - it leaves an objective "track" - the wave function changes in a mildly nonlinear way, then continues its non-unitary evolution until the next collapse, etc. The nonunitarity is negligible far from the detectors; the evolution is standard and unitary in empty space without detectors.

This completely describes the evolution of a single quantum system under continuous monitoring.

Yet, if we are not interested in a single quantum system, but care only about averages over an infinite ensemble of similarly prepared systems - only then, if we wish, do we do the averaging and get the perfectly linear Liouville master equation for the density matrix.

In short:

Single systems are described by collapsing wave functions; ensembles are described by a non-collapsing, continuous-in-time, linear master equation for the density matrix. That's all.
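For reference, the standard linear master equation of this type is commonly written in the Lindblad (GKSL) form (a textbook fact, not spelled out in the post above):
[tex]\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\left\{ L_k^\dagger L_k,\, \rho \right\} \right)[/tex]
where the operators [tex]L_k[/tex] encode the coupling to the detectors/environment; the right-hand side is manifestly linear in [tex]\rho[/tex].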
 
  • #41
Another question on decoherence: take statistical mechanics. There we have an atomic system and an environment (consisting of atomic subsystems that will be traced over) plus an interaction between them, V(t). The atomic system could be, for example, an atom, and the environment could represent collisions with other atoms or particles; the result of the interaction would be a broadening and shifting of its levels, e.g. creating a finite lifetime for the atomic states.
To get a finite lifetime one ABSOLUTELY NEEDS to trace over.

Now take decoherence. We again have an atomic system and an environment. The collapse we get is again triggered by the interaction and again by the many degrees of freedom (the huge number of "environmental" particles). What is not so clear is the "tracing over" mechanism in decoherence. What are we tracing over?
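For concreteness, here is a toy sketch of the tracing-over being asked about (my own minimal example, assuming one "system" qubit and one "environment" qubit with a CNOT-style coupling): the interaction copies which-path information into the environment, and summing over the environment's basis states kills the system's coherences.

[code]
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

# CNOT-style coupling copies which-path information into the environment.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi = CNOT @ np.kron(plus, ket0)        # (|00> + |11>)/sqrt(2), entangled
rho_joint = np.outer(psi, psi)          # 4x4 pure-state density matrix

# "Tracing over" = summing over the environment's basis indices.
rho_sys = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_sys)                          # diag(0.5, 0.5): coherences gone
[/code]

On this reading, in the stat-mech case the trace runs over the degrees of freedom of the colliding particles; in decoherence it runs over whatever environmental degrees of freedom the interaction has correlated with the system.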
 
  • #42
arkajad said:
If the collapse were described mathematically in this way, then we would certainly have a problem. But it can be described in a different way. Collapse happens objectively - it leaves an objective "track" - the wave function changes in a mildly nonlinear way, then continues its non-unitary evolution until the next collapse, etc. The nonunitarity is negligible far from the detectors; the evolution is standard and unitary in empty space without detectors.
You're speaking of a particle track in a cloud chamber. We can describe that track as a sequence of position measurements and indeed speak of the idealized limit of continuous measurement. But in reality the track is a discrete sequence of position measurements. This has no bearing on the discussion. Yes, we can measure the position of a quantum. Yes, we can measure it twice, three times, 10^14 times.

Now, this non-linearity you assert: where is it required, and at what level are you applying it? If you want to describe a quantum system with a position observable, observed every 10^-5 seconds or so, you have one description for the future measurements sans incorporation of the intermediate measurements. You get for that last measurement a nice classical probability distribution. You get for adjacent measurements nice conditional probabilities which incorporate the dynamics and the uncertainties of momenta.

You update your description by inputting, say, the first or the 108th position measurement value, and you get a different description because you input more information. The description has "collapsed". Input more actualized values and you collapse it more. Eventually you have something which looks very close to a classical particle trajectory, but it is still an expression of where you saw bubbles, i.e. records of measurements. You still express the measurement within the linear algebra over the Hilbert space. There is neither need for, nor empirical evidence supporting, the introduction of non-linearity in the dynamics at the level of the operator algebra. We already have the non-linearity of the positional dependence we see in all mechanics.

This completely describes the evolution of a single quantum system under continuous monitoring.

But of course. The description is that of a sequence of measured values (of position). That is all we ever see: measurements. This is why I harp on the fact that assertions of "what goes on" between measurements are meaningless. Rather, we can predict outcomes of measurements and evolve our prediction based on known dynamics. The dynamically evolving wave-function (or equivalent density operator) is a mathematical representation of that array of predictions.

Yet, if we are not interested in a single quantum system, but care only about averages over an infinite ensemble of similarly prepared systems - only then, if we wish, do we do the averaging and get the perfectly linear Liouville master equation for the density matrix.

So you declare. But why ignore the equivalence of representation, even for single quantum systems? Why are you so opposed to using the mathematical tools best equipped to express both the quality and degree of knowledge we have about how a single system will behave in subsequent measurements?

Here is our fundamental difference. You acknowledge that the density operator is a probabilistic description and thus expresses behavior of an ensemble. Let me use a different word, class instead of ensemble. We should be general enough to not presuppose the prior objective existence of the "ensemble" but rather allow "on the fly" instantiation of members. I can speak of the probability of outcomes of a single die throw because I can instantiate an arbitrary number of throws of that single die. There is no fixed number of outcomes and so I speak of the class of throws and not the set or ensemble.

(For other readers, let me refresh memories with the definition: a class is a collection of things defined by common attributes, as opposed to a set, which is defined purely in terms of membership; i.e. a set must have its elements defined prior to the set definition, while a class is defined by the criterion by which an instance is identified as a member. Thus we cannot, prior to measurement, say a given electron is an element of the set of electrons with spin z up. After measurement we have used the property of its spin to define its membership in the class of electrons whose spin has been measured as up. The act of measurement and its value defines the class and defines the electron as an instance of it.)

Now getting back to quantum theory: how can you define a probability for a single quantum? It will be measured with one value or another, not an ensemble of values, so one cannot speak of the probability of a single quantum's behavior as an intrinsic property of that one reality. Similarly, we cannot observe, say, an interference pattern for a single quantum. It just goes "blip", leaving a single position record. Rather, one speaks of the class of equivalent quanta and the frequency of outcomes for that class, which we can repeatedly instantiate by virtue of a source of such quanta to which we may affix a symbol [tex]\psi_0[/tex]. The "ket" or Hilbert space vector or wave-function from which we calculate various transition or measurement probabilities is a symbol attached to a source of individual quanta. The wave-function is as much a representation of an "ensemble" as is a density operator. The interference pattern of the wave-function, like the probability of any outcome, can only be confirmed by an ensemble of experiments, not a single instance.

This, I assert, is the only interpretation consistent with operational usage: the wave-function and density operator both are the quantum mechanical analogue of a classical probability distribution.

In short:

Single systems are described by collapsing wave functions; ensembles are described by a non-collapsing, continuous-in-time, linear master equation for the density matrix. That's all.

In short, single systems are prepared in such a way that we know they are members of a class of systems which we represent by a wave-function. Under measurement (given that an act of measurement is a physical interaction) we update the class of systems to which we assign the single system being described. Sometimes, with less than maximal information, the most accurate available class description is not a wave-function but a density operator. That is all.

Now my description is less assertive than yours, do you agree? We both agree we can speak of a class of systems "with the same wave-function" right?

If you can bring yourself to acknowledge that it is possible, and useful, to sometimes... upon occasion... speak of a class of quantum systems with the same set of values for a given complete observable, and hence the same wave-function, then can you explain to me, other than for personal spiritual reasons, how you can say this is ever not the case?
 
  • #43
jambaugh said:
If you want to describe a quantum system with a position observable, observed every 10^-5 seconds or so, you have one description for the future measurements sans incorporation of the intermediate measurements.
In a cloud chamber it is not you who decides how often the records are made. That is decided by the coupling. The timing is random; it is part of the random process.

You update your description by inputting, say, the first or the 108th position measurement value, and you get a different description because you input more information. The description has "collapsed". Input more actualized values and you collapse it more.

I am not inputting anything. All is done through the coupling. What I do is: at the end, I may have a look at the track.

Eventually you have something which looks very close to a classical particle trajectory, but it is still an expression of where you saw bubbles, i.e. records of measurements. You still express the measurement within the linear algebra over the Hilbert space. There is neither need for, nor empirical evidence supporting, the introduction of non-linearity in the dynamics at the level of the operator algebra.

Try to accomplish the above with a linear process and show me the result.


Now getting back to quantum theory: how can you define a probability for a single quantum?

I am describing the stochastic process that reproduces what we see, including the timing of the events. You can compare my simulation with experiment. And how do you compare two results of an experiment? You have two photographs of an interference pattern with 10000 electrons each, one taken on Monday and one on Tuesday. Of course the dots are in different places. And yet you notice that both describe the same phenomenon. How? Because you neglect the exact places and compare statistical distributions computed using statistical procedures applied to your photographs, each with 10000 dots.
Is there a probability involved? Somehow there is, but it is hidden.
The same when you compare two tracks in an external field. They are not the same, and yet they have similar "features": for instance, the average distance between dots, approximately the same curvature when you average, etc. Is probability involved? Somehow there is, but it is hidden in the application of statistics to finite samples.

If you can bring yourself to acknowledge that it is possible, and useful, to sometimes... upon occasion... speak of a class of quantum systems with the same set of values for a given complete observable, and hence the same wave-function, then can you explain to me, other than for personal spiritual reasons, how you can say this is ever not the case?

I prefer a down-to-Earth approach: comparing simulations based on a theory with real data coming from real experiments. I have nothing against classes. But for me the success of any theory lies in being able to simulate the processes we observe in our labs.

I am stressing the importance of timing, which is usually dynamical and not the "instantaneous measurement at a chosen time" of the textbooks. Textbooks do not know how to deal with dynamical timing, which is standard in the labs.
 
  • #44
arkajad said:
I am describing the stochastic process that reproduces what we see, including the timing of the events. You can compare my simulation with experiment. And how do you compare two results of an experiment? You have two photographs of an interference pattern with 10000 electrons each, one taken on Monday and one on Tuesday. Of course the dots are in different places. And yet you notice that both describe the same phenomenon. How? Because you neglect the exact places and compare statistical distributions computed using statistical procedures applied to your photographs, each with 10000 dots.
Is there a probability involved? Somehow there is, but it is hidden.
The same when you compare two tracks in an external field. They are not the same, and yet they have similar "features": for instance, the average distance between dots, approximately the same curvature when you average, etc. Is probability involved? Somehow there is, but it is hidden in the application of statistics to finite samples.
Your simulation matches experiments only in the aggregate (same relative frequencies, same lines of cloud chamber bubbles, but not identical individual outcomes); thus your inference is again about classes of individual quanta. I'm sure you're doing good work, but my objections are to how you use the term "collapse". If you are simulating entanglement then you are positively not simulating the physical states of the quantum systems, since you would necessarily satisfy Bell's inequality and thus fail to get the proper correlations. You would need to be simulating the (probability) distributions of outcomes directly, which would involve nothing more than doing the QM calculations.

I prefer a down-to-Earth approach: comparing simulations based on a theory with real data coming from real experiments. I have nothing against classes. But for me the success of any theory lies in being able to simulate the processes we observe in our labs.
The issue is what the theory says - the semantics of the language you use. Words mean things. I can simulate a given probability distribution, but that won't mean the internals of my simulation correspond to a physical process which, upon repetition, matches that distribution. My point is that the theory matches what goes on in the lab only insofar as it makes probabilistic predictions - quite accurate ones, but only for aggregates of (and hence classes of) experiments.

I am stressing the importance of timing, which is usually dynamical and not the "instantaneous measurement at a chosen time" of the textbooks. Textbooks do not know how to deal with dynamical timing, which is standard in the labs.

The fact that you think the measurement is an instantaneous process as represented in the textbooks is where I see you misinterpreting. The mathematics is instantaneous because it represents something one level of abstraction above the physical process, namely the logic of the inferences we make about predictions. (There is no "timing" in mathematics; 2+2=4 eternally.) The "collapse problem" is not with the theory but with the mind misunderstanding what a specific component of the theory refers to.

The representation of measurement goes beyond "instantaneous" as I pointed out in the (logically) reversed representation of an experiment. I'll repeat in more detail:

Consider a single experimental setup. A quantum is produced from a random source; a sequence of measurements is made, A then B then C (which take time and room on the lab's optical bench or whatever); and then a final detector registers the system to assure a valid experiment. If you like, you can consider intermediate dynamics as well, but for now let's keep it simple.

What does theory tell us about the sequence of measurements? Firstly there is randomness in outcomes. Secondly there is correlation of measured values. How are they correlated? QM says...

[tex] Prob(B=b |A=a) = |\langle b|a\rangle |^2=Tr(\rho_b \rho_a)[/tex]
[tex] Prob(C=c |B=b) = |\langle c|b\rangle |^2=Tr(\rho_c \rho_b)[/tex]
But this holds only if the measurements are complete, i.e. non-degenerate (unless you're using density operators, in which case everything works fine).

We can reverse the conditional probabilities:
[tex]Prob(A=a|B=b) = |\langle a|b\rangle |^2=|\langle b|a\rangle |^2[/tex]
et cetera.

When we point to the lab bench at the region between measuring device A and measuring device B, we might say "state [tex]|a\rangle[/tex]", but that's just saying that over at measuring device A we registered "a", and so that is the condition on subsequent measurements (whether they be B, or C, or D). We can similarly point to that same region and say "state [tex] \langle b|[/tex]", but we'd mean that a subsequent measurement "b" is made, and so this is the condition on prior measurements (be they A, or C, or D). Whether we are "forward tracking" or "back tracking" the causal correlation between measurements, we are expressing these correlations via the "bras" and "kets", not modeling a physical state of the system; and since we can only confirm we are using the correct ones by carrying out many measurements, they represent at best classes of systems.

e.g. [tex]|a\rangle[/tex] is the class of systems for which A has been measured with value 'a'.
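These identities are easy to check numerically (a small sketch of my own; the spin-1/2 choices of |a> and |b> are arbitrary):

[code]
import numpy as np

ket_a = np.array([1.0, 0.0])                  # |a>, e.g. spin-z up
ket_b = np.array([1.0, 1.0]) / np.sqrt(2)     # |b>, e.g. spin-x up
rho_a = np.outer(ket_a, ket_a)
rho_b = np.outer(ket_b, ket_b)

print(abs(ket_b @ ket_a) ** 2)           # |<b|a>|^2       = 0.5
print(np.trace(rho_b @ rho_a).real)      # Tr(rho_b rho_a) = 0.5
print(abs(ket_a @ ket_b) ** 2)           # |<a|b>|^2       = 0.5, the
                                         # time-reversed conditional
[/code]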

Now the business with collapse is a matter of transitioning from the point where we make a measurement and acknowledge a particular value that was measured. You can say it this way:

"We consider an ensemble of systems for which A was observed and consider the subset for which A=a" Here we "collapse" to the subset of a fixed ensemble.

Or we can speak in the singular.
"We consider a single quantum for which A is observed, and then..." wait for it ... "we consider the case of an actual measured value of A=a." Now we know the quantum is, for the purposes of subsequent measurements, in the class of those for which a measured value A=a has occurred.

We collapse the class to which we assign the single quantum for our purposes of making subsequent predictions. The collapse is not itself a physical act; it is a conceptual step we make corresponding to the physical act of measurement. That measurement may be delayed, may take a very short time, or may take an arbitrarily long time. The details are unimportant to the conceptual process of incorporating that information (or, as is more typical, considering a hypothetical possibility).

Your humble stochastic simulations are fine research --I am sure-- but please refer to the physical processes by their rightful name, "interaction", not "collapse".
 
  • #45
jambaugh said:
Your simulation matches experiments only in the aggregate (same relative frequencies, same lines of cloud chamber bubbles, but not identical individual outcomes); thus your inference is again about classes of individual quanta. I'm sure you're doing good work, but my objections are to how you use the term "collapse". If you are simulating entanglement then you are positively not simulating the physical states of the quantum systems, since you would necessarily satisfy Bell's inequality and thus fail to get the proper correlations. You would need to be simulating the (probability) distributions of outcomes directly, which would involve nothing more than doing the QM calculations.

I am sure I am getting all the correlations that are seen in experiments. I do not care about Bell inequalities which do not even address the continuous monitoring of single quantum systems.

The issue is what the theory says - the semantics of the language you use. Words mean things. I can simulate a given probability distribution, but that won't mean the internals of my simulation correspond to a physical process which, upon repetition, matches that distribution. My point is that the theory matches what goes on in the lab only insofar as it makes probabilistic predictions - quite accurate ones, but only for aggregates of (and hence classes of) experiments.

I am not talking about simulating probability distributions. I am talking about stochastic processes and their trajectories in time.

The fact that you think the measurement is an instantaneous process as represented in the textbooks is where I see you misinterpreting. The mathematics is instantaneous because it represents something one level of abstraction above the physical process, namely the logic of the inferences we make about predictions. (There is no "timing" in mathematics; 2+2=4 eternally.) The "collapse problem" is not with the theory but with the mind misunderstanding what a specific component of the theory refers to.

The collapse is part of a stochastic process. Sometimes we have one collapse; the time of the collapse is always a random variable. That is what the standard approach to QM does not take into account - for historical reasons and because of the inertia of human thought.

The representation of measurement goes beyond "instantaneous" as I pointed out in the (logically) reversed representation of an experiment. I'll repeat in more detail:

Consider a single experimental setup. A quantum is produced from a random source; a sequence of measurements is made, A then B then C (which take time and room on the lab's optical bench or whatever); and then a final detector registers the system to assure a valid experiment. If you like, you can consider intermediate dynamics as well, but for now let's keep it simple.

What does theory tell us about the sequence of measurements?

It tells us absolutely nothing about the timing. You are consistently neglecting this issue.

Your humble stochastic simulations are fine research --I am sure-- but please refer to the physical processes by their rightful name, "interaction", not "collapse".

In fact, I do not use the term "interaction", because interaction is usually understood as a "Hamiltonian interaction". I prefer the term "non-Hamiltonian coupling".
 
  • #46
arkajad said:
I am sure I am getting all the correlations that are seen in experiments. I do not care about Bell inequalities which do not even address the continuous monitoring of single quantum systems.
Bell's inequalities (and their violation) are about correlations; if you don't care, then you don't care.
I am not talking about simulating probability distributions. I am talking about stochastic processes and their trajectories in time.
I know you are not talking about it, but that is what you are doing. You are saying your computer-modeled stochastic process matches the probability distributions for physical systems. There, and only there, can you compare with experiment. You speak of "collapse", but there is no reason to believe the "collapses" in your stochastic model match anything "out there in actuality". It is the old classic phenomenologist's barrier: "We can only know what we experience." Yes, it is too restrictive for science in general. At the classical scale we can infer beyond pure experience, but QM specifically pushes us to the level where that barrier is relevant, and we must be more the positivist or devolve into arguments over "how many angels can dance on the head of a pin".

The collapse is part of a stochastic process. Sometimes we have one collapse; the time of the collapse is always a random variable. That is what the standard approach to QM does not take into account - for historical reasons and because of the inertia of human thought.
Yes, the collapse is part of a stochastic process, but that process is a conceptual process (your model or mine), not a physical process (actual electrons). Again you speak of "the time of the collapse" as if you could observe physical collapse, and again I ask "HOW?" Until then, the question of "why QM does not take this into account" lacks foundation.

I think you misuse the term "collapse" where you should be speaking of "decoherence", which is the physical process (of external random physical variables, i.e. "noise", being introduced into the physical system).

It tells us absolutely nothing about the timing. You are consistently neglecting this issue.
And I'm explaining why it not only can be neglected but should be. The timing of "collapse" is not a physically meaningful phrase. I can collapse the wave-function (on paper) at any time I choose after the measurement is made. If you'd like to discuss the physical process of measurement itself then let's, but in a different thread, as that is quite a topic in itself.


In fact, I do not use the term "interaction", because interaction is usually understood as a "Hamiltonian interaction". I prefer the term "non-Hamiltonian coupling".
"Coupling" is "interaction", Hamiltonians are how we represent the evolution of the whole composite of the two systems being coupled. When you focus on part of that whole you loose the Hamiltonian format but it is still an interaction. You can still work nicely with this focused case using ... pardon my bringing this up again... density matrices and a higher order algebra. The density operators can still be "evolved" linearly but no longer with a adjoint action of Hamiltonian within the original operator algebra. You see then decoherence occur (the entropy of the DO increases over time, representing this random stochastic process you're modeling). I think you'd find it of value to determine exactly how your computer models of stochastic processes differs from or is equivalent to this sort of representation.

I think your prejudice against DOs (describing a single system) is what is keeping you from understanding this fully. The dynamics of the coupling of system to episystem can be expressed via a Hamiltonian on the composite system + episystem; tracing over the "epi" part then yields a non-sharp, "decohering" system description... but again one expressible only as a density operator.

Again I submit when you speak of a "wave function valued random variable" (which it seems to me you are using) you are effectively describing a density operator.

Consider a random distribution of Hilbert space vectors with corresponding probabilities:
[tex] \{(\psi_1,p_1),(\psi_2,p_2),\cdots\}[/tex]
it is equivalently realized as a density operator:
[tex]\rho = \sum_k p_k \rho_k[/tex]
where
[tex]\rho_k = \psi_k\otimes\psi^\dagger_k.[/tex]

That IS what the density operator represents, pragmatically and within the modern literature. Yes, when we speak of a (random) ensemble of systems we must use density operators, but that isn't the D.O.'s definition. A probability can be associated with a single system in that it expresses our knowledge about that system in the format of: to what class of systems that one belongs. In expressing this we understand that the definition of the value of a probability comes from the class, not from the singular system. A D.O. is a probability distribution over a set of Hilbert space vectors, e.g. wave-functions.
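In code, the map just described is a one-liner (a sketch; the example ensemble is arbitrary):

[code]
import numpy as np

# A random distribution of Hilbert space vectors with probabilities:
ensemble = [(np.array([1.0, 0.0]), 0.25),
            (np.array([1.0, 1.0]) / np.sqrt(2), 0.75)]

# rho = sum_k p_k |psi_k><psi_k|
rho = sum(p * np.outer(psi, psi.conj()) for psi, p in ensemble)
print(rho)                    # Hermitian, trace 1
print(np.trace(rho).real)     # 1.0: encodes every prediction for the class
[/code]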
 
  • #47
jambaugh said:
Bell's inequalities (and their violation) are about correlations; if you don't care, then you don't care.

I know you are not talking about it, but that is what you are doing. You are saying your computer-modeled stochastic process matches the probability distributions for physical systems.

It matches more than that. It matches also in the fact that in the real world probabilities are calculated by counting and averaging characteristics of single events, and not by calculating integrals. Those who neglect that fact are deliberately blind to a part of reality. They say: "we need just tools for calculating numbers". Well, that's their choice.

You speak of "collapse" but there's no reason to believe the "collapses" in your stochastic model matches anything "out there in actuality".

There are no reasons to believe anything. Each belief is just a personal choice - like choosing "we only need to know how to calculate numbers and nothing more".

It is the old classic phenomenologist's barrier: "We can only know what we experience." Yes, it is too restrictive for science in general. At the classical scale we can infer beyond pure experience, but QM specifically pushes us to the level where that barrier is relevant, and we must be more the positivist or devolve into arguments over "how many angels can dance on the head of a pin".

QM "pushes" some physicists and some philosphers into what you call "positivism", but some are more resistant than others. But even so, the "event" based model can calculate more than the posivitistic "don't ask questions, just calculate" model. So, also with a positivistic attitude you are behind.

Yes, the collapse is part of a stochastic process, but that process is a conceptual process (your model or mine), not a physical process (actual electrons).

Well, Hilbert spaces, wave functions, operators, spacetime metrics, are also conceptual. So what?

Again you speak of "the time of the collapse" as if you can observe physical collapse and again I ask "HOW?" Until then the "why QM does not take this into account" question lacks foundation.

They always come in pairs: (collapse, event). We observe events. Collapses are in the Platonic part of the world. Nevertheless, if you want to simulate events you need the collapses - just as, in order to calculate the orbits of planets, you need to solve differential equations. Differential equations are in the Platonic world as well.

I think you misuse the term "collapse" where you should be speaking of "decoherence", which is the physical process (of external random physical variables, i.e. "noise", being introduced into the physical system).

"Random variables"? "External"? "noise"? Are these better or sharper terms? I strongly doubt.

And I'm explaining why it not only can be neglected but should be. The timing of "collapse" is not a physically meaningful phrase.

It is not a physical phrase. "Timing of the event" is. But they always come in pairs.

I can collapse the wave-function (on paper) at any time I choose after the measurement is made. If you'd like to discuss the physical process of measurement itself then let's, but in a different thread, as that is quite a topic in itself.

Right. You can collapse a wave-function on paper, and you can erase a differential equation on paper. This will not destroy the planet's orbit.

"Coupling" is "interaction", Hamiltonians are how we represent the evolution of the whole composite of the two systems being coupled. When you focus on part of that whole you loose the Hamiltonian format but it is still an interaction. You can still work nicely with this focused case using ... pardon my bringing this up again... density matrices and a higher order algebra. The density operators can still be "evolved" linearly but no longer with a adjoint action of Hamiltonian within the original operator algebra. You see then decoherence occur (the entropy of the DO increases over time, representing this random stochastic process you're modeling). I think you'd find it of value to determine exactly how your computer models of stochastic processes differs from or is equivalent to this sort of representation.

You can play with density matrices, but they will not let you understand and simulate the observed behavior of a unique physical system. You may deliberately abandon that - you may decide "I don't need it, I don't care" - but even then I am pretty sure it is a forced choice. You choose it because you do not know anything better. You even convince yourself that there can't be anything better. But what if there can be?

I think your prejudice against DOs (describing a single system) is what is keeping you from understanding this fully. The dynamics of the coupling of system to episystem can be expressed via a Hamiltonian on the composite system + episystem; tracing over the "epi" part then yields a non-sharp, "decohering" system description... but again one expressible only as a density operator.

It is not so much my prejudice. It's my conscious choice.

Again I submit when you speak of a "wave function valued random variable" (which it seems to me you are using) you are effectively describing a density operator.

Well, it is like saying: when you speak of a function, you effectively speak about its integral. In a sense you are right, but knowing a function you can do more than just compute one of its characteristics.

Consider a random distribution of Hilbert space vectors with corresponding probabilities:
[tex] \{(\psi_1,p_1),(\psi_2,p_2),\cdots\}[/tex]
it is equivalently realized as a density operator:
[tex]\rho = \sum_k p_k \rho_k[/tex]
where
[tex]\rho_k = \psi_k\otimes\psi^\dagger_k.[/tex]
This is one way. Now, try to go uniquely from your density matrix to the particular realization of the stochastic process. You know it can't be done. Therefore there is more potential information in the process than in the Markov semi-group equation.
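The non-uniqueness being claimed here is easy to exhibit concretely (a sketch; the maximally mixed qubit is the standard example): two different "wave-function valued random variables" yield one and the same density matrix, so nothing computed from rho alone can recover which realization produced it.

[code]
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# Two different ensembles of kets...
rho_z = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)
rho_x = 0.5 * np.outer(plus, plus) + 0.5 * np.outer(minus, minus)

# ...one and the same density matrix (I/2), hence identical predictions.
print(np.allclose(rho_z, rho_x))    # True
[/code]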

That IS what the density operator represents, pragmatically and within the modern literature. Yes, when we speak of a (random) ensemble of systems we must use density operators, but that isn't the D.O.'s definition.

No, I don't have to. Just as, having a function, I don't have to calculate its integral. I can be more interested in its derivative, for example. Or I can modify its values on some interval.

A probability can be associated with a single system in that it expresses our knowledge about that system in the format of: to what class of systems that one belongs. In expressing this we understand that the definition of the value of a probability comes from the class, not from the singular system. A D.O. is a probability distribution over a set of Hilbert space vectors, e.g. wave-functions.

Well, you are speaking about "our knowledge", while I am speaking about our attempts to understand the mechanism of the formation of events - a mechanism that can lead us to another, perhaps even better mechanism, without random numbers at the start.
 
  • #48
Pardon the long delay in reply; I've been tied up with the holidays and family...
arkajad said:
...
There are no reasons to believe anything. Each belief is just a personal choice - like choosing "we only need to know how to calculate numbers and nothing more".
Then you see no distinction between belief in voodoo and belief in atoms. There is so much wrong with this statement I don't know where to begin.
QM "pushes" some physicists and some philosphers into what you call "positivism", but some are more resistant than others. But even so, the "event" based model can calculate more than the posivitistic "don't ask questions, just calculate" model. So, also with a positivistic attitude you are behind.
Resistant or not, what you can calculate doesn't validate the identification of your calculus with "reality", especially when there exist multiple methods of calculation. Reality is not the mathematics; it is the empirical assumptions which cannot be ignored. I can ignore your stochastic processes without any loss in the fidelity of the predictions of QM.
Well, Hilbert spaces, wave functions, operators, spacetime metrics, are also conceptual. So what?
So they are not "the reality" but our tools for calculating what does or may happen... and we err in forgetting this fact. (e.g. when we wonder about collapse (and the timing thereof) as if it were happening other than on paper or in the mind of the holder of the concept.)
They always come in pairs: (collapse, event). We observe events. Collapses are in the Platonic part of the world. Nevertheless, if you want to simulate events you need the collapses - just as, in order to calculate the orbits of planets, you need to solve differential equations. Differential equations are in the Platonic world as well.
OMG you are a Platonist? No wonder...
You say "Platonic part of the world" I say "on paper". Are we just arguing semantics or do you actually believe there is a real universe of mathematical forms?

BTW, we could calculate orbits prior to the development of differential calculus: we simply extended into the future the epicycle series matching prior observations. Of course the differential calculus is superior, as it relates the behavior to e.g. the masses of the bodies, thus eliminating the infinite series of variables which must be determined empirically...

and yet again, when you speak of "the time of the collapse" as if you can observe physical collapse, I ask "HOW?" Until then the "why QM does not take this into account" question lacks foundation.

"Random variables"? "External"? "noise"? Are these better or sharper terms? I strongly doubt.
I placed some of these terms in quotes because they were common-usage synonyms for the sharper ones. But YES, "random variable" has a specific sharp meaning: the symbol representing outcomes of a class of empirical events, specifically outcomes to which we can assign probabilities. And "external" has a perfectly well-defined operational meaning: we can isolate a system from external effects without changing the system itself (as a class, i.e. defined by its spectrum of observables and degrees of freedom).

What is more important "external" and "noise" have distinct operational meanings. You can "externally" inject "noise" into a system and see the effect. What meaning is there for "collapse" except as a calculation procedure?

Right. You can collapse a wave-function on paper, and you can erase a differential equation on paper. This will not destroy the planet's orbit.
Very good. That's progress. So you agree there is a "collapse on paper", but you seem to be saying there is also a "collapse in reality" which the paper process represents. Correct?

You can play with density matrices, but they will not let you understand and simulate the observed behavior of a unique physical system. You may deliberately abandon that - you may decide "I don't need it, I don't care" - but even then I am pretty sure it is a forced choice. You choose it because you do not know anything better. You even convince yourself that there can't be anything better. But what if there can be?
"anything better" is a value judgment. Let us establish the value judgment within which we work as physicists. I say "there can't be anything better" specifically in the context of the prediction of physical outcomes of experiments and observables. By what value system do you claim something that is "better"?
It is not so much my prejudice. It's my conscious choice.
A prejudice may or may not be a conscious choice. The point is that it is an a priori judgment. Revisit it, and ask instead what is the justification for that judgment. I know a man who consciously ignores the evidence of evolution because it might undermine his faith in the literal "truth" of the bible. Are you doing the same w.r.t. density operators?

I keep bringing these up because, like using differential equations instead of epicycles, they provide more insight into what is mathematically necessary to predict physical events. Whatever is excised by their use vs. wave functions need not be a component of physical "reality". Most importantly, one finds there is no distinction between a "quantum probability" and a "classical probability", and so no distinction in the interpretation of their "collapse (on paper)" (which, recall, was the reason I brought them up to begin with).

Well, it is like saying: when you speak of a function, you effectively speak about its integral. In a sense you are right, but knowing a function you can do more than just compute one of its characteristics.
Yes, you have more components to play with (as with epicycles you have more variables to tweak). The important point is that with the DOs you have less, yet no loss of predictive information. Thus the "more" you refer to is not linked, or linkable, to any empirical phenomena. Does it then still have physical meaning, in your considered opinion?

This is one way. Now, try to go uniquely from your density matrix to the particular realization of the stochastic process. You know it can't be done. Therefore there is more potential information in the process than in the Markov semi-group equation.
Again see my point above... what utility does this procedure have if it does not change what one can empirically predict? (I do not deny it might have some utility but I call your attention to the nature of that utility if it does manifest.)
No, I don't have to. Just as, having a function, I don't have to calculate its integral. I can be more interested in its derivative, for example. Or I can modify its values on some interval.
Yes, you can do what you like as a person, but are you then doing physics or astrology? To express the maximal known information about a system in terms of usage common to the physics community, you really, really should use density operators as they are understood in that community.
Well, you are speaking about "our knowledge", while I am speaking about our attempts to understand the mechanism of the formation of events - a mechanism that can lead us to another, perhaps even better mechanism, without random numbers at the start.
Then you are on a speculative quest. That is fine and good. But acknowledge that you speculate, instead of declaring the orthodox to be "wrong". When you find that mechanism and can justify the superiority of believing in its reality, then come back.

Let me recall for you the thousands of amateur "theorists" who post on the various blogs and forums about how "Einstein is wrong because I can predict what he predicts by invoking an aether". They justify their noisy, insistent proclamations by saying they're "seeking a mechanism to explain"... An explanation is always in terms of other phenomena, and when someone seeks to explain in terms of fundamentally unobservable phenomena there is no merit in it.

Yes, I am a positivist when it comes to physics. Pure deduction can only link between propositions; it cannot generate knowledge on its own. However, too many times we find implicit hidden axioms in the logic of arguments about nature. Under further scrutiny we find those implicit axioms are chosen out of wish fulfillment, to justify the desired conclusions. The only way to avoid this is to adhere to a positivistic discipline: stick to terms which have either operational meaning or explicitly mathematical meaning.

If one does not grant "reality status" to wave function in the form e.g. of Bhomian pilot waves then there is no need to explain collapse, it is explained already in the paper version in a simple trivially obvious way.

The chain of explanation must stop somewhere. It isn't "turtles all the way down" (http://en.wikipedia.org/wiki/Turtles_all_the_way_down). I see that quantum mechanics is as it is because it is the limit of our ability to explain in terms of more fundamental empirical phenomena. As the mathematician must eventually stop the chain of formal definition at the point of fundamental undefined terms, so too the physicist must stop the chain of explanation at the point of fundamental unexplained phenomena. At that level physics must remain positivistic.
 
  • #49
jambaugh said:
So they are not "the reality" but our tools for calculating what does or may happen... and we err in forgetting this fact. (e.g. when we wonder about collapse (and the timing thereof) as if it were happening other than on paper or in the mind of the holder of the concept.)

You are missing the point. Everybody is calculating a lot of things - you too. There is nothing wrong with calculations. There is nothing wrong with solving differential equations - they are on paper or in the mind.

The point is whether at the end of your calculation you get something that you can compare with observations. In this respect there is no difference between solving differential equations and models with collapses. In each case at the end you get numbers or graphs that you can compare with experimental data.

So, your war is misdirected.
 
  • #50
arkajad said:
You are missing the point. Everybody is calculating a lot of things - you too. There is nothing wrong with calculations. There is nothing wrong with solving differential equations - they are on paper or in the mind.

The point is whether at the end of your calculation you get something that you can compare with observations. In this respect there is no difference between solving differential equations and models with collapses. In each case at the end you get numbers or graphs that you can compare with experimental data.

So, your war is misdirected.

What you say here is correct w.r.t. calculations yielding observable predictions. But the validity of a calculation does not imply the reality of the mathematical objects or processes used.

Specifically, the calculation step "collapse the wave-function" does not, just by virtue of giving correct empirical predictions, imply there is a physical collapse occurring. It is thus incorrect to speak of "when the collapse occurs" or "the cause of a collapse" as if it were physical.

That is what I have been consistently attacking and the issue you keep sidestepping.

There is a distinct physical process of decoherence, which one can express easily in the density operator language (which you resist accepting), which is not the same as collapse, and which indeed shows that classical and quantum collapse are indistinguishable (classical collapse being the Bayesian updating of probabilities given subsequent observations).
 
  • #51
jambaugh said:
There is a distinct physical process of decoherence, which one can express easily in the density operator language (which you resist accepting), which is not the same as collapse, and which indeed shows that classical and quantum collapse are indistinguishable (classical collapse being the Bayesian updating of probabilities given subsequent observations).

There are distinct physical events which one can express easily in the stochastic processes' language.

I did not see one event in finite time for an individual quantum system derived from the decoherence formalism. But if you show me one - I may even change teams.

BTW, collapse is NOT Bayesian updating of probabilities. It is a sudden change of the wave function. Probabilities are a related (in a not-so-simple way), but not the same, business. Moreover, they are not Bayesian - at least not those that I am talking about.
 
  • #52
arkajad said:
There are distinct physical events which one can express easily in the stochastic processes' language.

I did not see one event in finite time for an individual quantum system derived from the decoherence formalism. But if you show me one - I may even change teams.

BTW, collapse is NOT Bayesian updating of probabilities. It is a sudden change of the wave function. Probabilities are a related (in a not-so-simple way), but not the same, business. Moreover, they are not Bayesian - at least not those that I am talking about.

A Bayesian updating of probabilities is a "sudden change in the probabilities". (If you'll please let me invoke density operators...)

You can express the collapse either before or after you express the decoherence aspect of the measurement process. (There is no need to pay attention to the decoherence if you simply want to incorporate the new information of the measured value.) But in detail: when you measure, say, X, you are coupling the system via its X observable to a measuring device which itself is coupled to an entropy dump.

In the process, the X observable becomes correlated with the recording variable (meter) of the measuring device while the whole system + meter decoheres. One is amplifying the quantum variable, and like any amplifier you must have energy (to move the meter) and a heat sink (to make the meter settle down into the recorded position).

Insofar as the description of the measured system goes, you have a density operator diagonal in the eigen-basis of X and correlated with the density operator describing the meter. This meter description is highly separated, and you can treat it as a classical system at this point. You have a series of probabilities for each correlated X value of the system and "x" record on the meter.

Now you "collapse" by looking at the meter and updating the now classical probability distribution over the eigen-basis of X. This step is simply the same Baysian updating of the classical probabilities of the outcomes of the X measurement from what it was to certainty that a specific x value was measured. It is qualitatively no different from updating the expectation value of a lotto ticket after you read the results of the Sunday drawing in the morning paper.

Now, there is a great deal of variability in the decoherence stage: you can entangle then decohere, or entangle then measure the entangled partner. The uncertainty principle reflects the necessary correlation of observables non-commuting with X to variables in the measuring device which necessarily get coupled into the entropy dump. E.g., to settle down a needle on a literal meter you need to dissipate its momentum into a heat sink (friction and coil resistance). But once the system in question interacts with the auxiliary meter system, the original system considered by itself has effectively decohered, as reflected in its reduced density operator.

If you've further interest in the matter I'll see if I can cook up a detailed description of a particular act of measurement. (It's something I need to do anyway.)
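A compressed sketch of the three stages just described (my own toy numbers, not the promised detailed write-up): coherent state, decoherence in the measured basis, then the "collapse" as ordinary conditioning on the meter record.

[code]
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = 0.6 * ket0 + 0.8 * ket1          # an arbitrary pure state
rho = np.outer(psi, psi)               # stage 0: coherent, off-diagonals 0.48

# Stage 1: coupling to meter + entropy dump decoheres rho in the X basis.
rho_dec = np.diag(np.diag(rho))        # diag(0.36, 0.64), classical mixture

# Stage 2: reading the meter record "1" is ordinary Bayesian updating of
# that now-classical distribution: certainty replaces the prior 0.64.
rho_post = np.outer(ket1, ket1)
print(rho_dec[1, 1].real, np.trace(rho_post).real)    # 0.64 1.0
[/code]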
 
  • #53
arkajad said:
You are missing the point. Everybody is calculating a lot of things - you too. There is nothing wrong with calculations. There is nothing wrong with solving differential equations - they are on paper or in the mind.

The point is whether at the end of your calculation you get something that you can compare with observations. In this respect there is no difference between solving differential equations and models with collapses. In each case at the end you get numbers or graphs that you can compare with experimental data.

So, your war is misdirected.

a lot of physics professionals with papers in respected and prestigious journals.

arkajad said:
BTW. Collapse is NOT Bayesian updating of probabilities. It is a sudden change of the wave function. Probabilities are a related (in a not so simple way) - but not the same - business. Moreover, they are not Bayesian. At least not the ones I am talking about.

I agree.
 
  • #54
jambaugh said:
If you've further interest in the matter I'll see if I can cook up a detailed description of a particular act of measurement. (It's something I need to do anyway.)

It's not been done so far? So unimportant? Amazing!
 
  • #55
ZPower said:
So decoherence does not collapse the wavefunction?

The relation is more complex.

Decoherence explains the dynamical decay of the off-diagonal entries in a density matrix rho, thus reducing a nondiagonal density matrix (e.g., one corresponding to a pure state psi via rho = psi psi^*) to a diagonal one, usually one with all diagonal elements occupied. In particular, this turns pure states into mixtures.

On the other hand, the collapse turns a pure state psi into another pure state, obtained by projecting psi to the eigenspace corresponding to a measurement result. In terms of density matrices, and assuming that the eigenspace is 1-dimensional, a collapse turns a density matrix rho into a diagonal matrix with a single diagonal entry. This is not explained at all by decoherence.
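
For illustration, here is a minimal NumPy sketch of this contrast, using an assumed single-qubit pure state (|0> + |1>)/sqrt(2): decoherence removes the off-diagonal entries of rho but keeps the full diagonal, while collapse (say, onto |0>) leaves a single diagonal entry.

import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)  # pure state (|0> + |1>)/sqrt(2)
rho = np.outer(psi, psi.conj())          # rho = psi psi^*, all entries 1/2

# Decoherence: the off-diagonal entries decay away; the diagonal
# (the outcome probabilities) survives -> a proper mixture.
rho_decohered = np.diag(np.diag(rho))    # diag(1/2, 1/2)

# Collapse: project psi onto the eigenspace of the observed result,
# say |0>, and renormalize -> again a pure state.
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # projector onto |0>
psi_c = P0 @ psi
psi_c = psi_c / np.linalg.norm(psi_c)
rho_collapsed = np.outer(psi_c, psi_c.conj())  # diag(1, 0)

print(rho_decohered)   # mixture: both outcomes still present
print(rho_collapsed)   # single diagonal entry: not what decoherence gives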

A thorough discussion is given in Schlosshauer's survey article
http://lanl.arxiv.org/abs/quant-ph/0312059

See also Chapters A4 and A5 in my theoretical physics FAQ at
http://arnold-neumaier.at/physfaq/physics-faq.html#decoherence
in particular the entry ''Does decoherence solve the measurement problem?''
 
  • #56
arkajad said:
It's not been done so far? So unimportant? Amazing!

Can someone comment on these debates between jambaugh and arkajad?

Jambaugh is bona fide Copenhagen: he believes that the wave function doesn't actually represent the properties of the system but is just part of a mathematical formalism that can be used to calculate probabilities of possible results of experiments, as someone put it. Arkajad is on the other side, where something may be occurring physically, which would validate interpretations like Many Worlds and Bohmian Mechanics. Jambaugh (I think) believes that ontological interpretations are not possible at all. The following illustrates the point.

In the double slit, an emitter sends a buckyball composed of 60 carbon atoms in a pure state. We can measure interference if the system is not perturbed by external noise. Jambaugh believes that what occurs between emission and detection is completely indeterminate - in fact, that nothing happens physically. Arkajad believes the particle can still exist in between, as in Many Worlds, Bohmian Mechanics, or even Cramer's Transactional Interpretation.

If Jambaugh were right that nothing physically happens between emission and detection, the implication is that reality may consist only of measurements, and that between measurements physicality doesn't even exist. This suggests we could be living inside some kind of computer simulation. In that case only the output (the measurement) matters; what goes on between and behind it is just program code and its execution. So between emission and detection in the double slit, reality would be processed as some kind of computer code.

Even if we can detect billions of galaxies, it doesn't mean they are really there - our measurements detect only the photons coming from them. And since nothing happens in between, the photons don't have to travel or even exist... we can say that the program contains a subroutine to make it appear that photons from the alleged cosmos are being detected. Imagine a subroutine that surrounds and showers the virtual Earth with virtual cosmos data. This is one possibility if we take seriously the Copenhagen view that only measurement is meaningful and that what happens in between is completely indeterminate, as Jambaugh emphasized using superior mathematics - which may be nothing more than features of the programming language used in modelling us. I mention all this hoping someone will refute this hypothesis. Refuting it would mean arkajad is right that something physically exists.
 
  • #57
Alfrez said:
a mathematical formalism that can be used to calculate probabilities of possible results of experiments

Quantum mechanics does much more than predict probabilities of possible results of experiments.

For example, it is used to predict the color of molecules, their response to external electromagnetic fields, the behavior of materials made of these molecules under changes of pressure or temperature, the production of energy from nuclear reactions, the behavior of transistors in the chips on which your computer runs, and a lot more. Most of these predictions have nothing at all to do with collapse.

It is a pity that the public reception of quantum mechanics is so much biased towards the queer aspects of quantum systems. The real meaning and power of quantum mechanics do not come from studying the foundations but from studying how QM is applied when put to actual use.
 
  • #58
A. Neumaier said:
Quantum mechanics does much more than predict probabilities of possible results of experiments.

For example, it is used to predict the color of molecules, their response to external electromagnetic fields, the behavior of materials made of these molecules under changes of pressure or temperature, the production of energy from nuclear reactions, the behavior of transistors in the chips on which your computer runs, and a lot more. Most of these predictions have nothing at all to do with collapse.

It is a pity that the public reception of quantum mechanics is so much biased towards the queer aspects of quantum systems. The real meaning and power of quantum mechanics do not come from studying the foundations but from studying how QM is applied when put to actual use.

I quoted it out of context. The complete sentence is "|u>+|v> doesn't actually represent the properties of the system, but is just a part of a mathematical formalism that can be used to calculate probabilities of possible results of experiments", as "he put it" (which I mentioned). I replaced "|u>+|v>" with the simple words "wave function", which created the confusion. Of course I know QM is used in many everyday electronics and applications.

You and Jambaugh are bona fide Copenhagen, while Fredrik and others are Many Worlders, so I guess the debates are still valid. Both produce the same experimental outputs. Is there any implication of knowing the correct interpretation? Yes: unification with General Relativity, or comprehending quantum spacetime ontologically.
 
  • #59
Alfrez said:
You and Jambaugh are bona fide Copenhagen.

No; I am not.

I have my own interpretation. It is superior to any I found in the literature, since it
-- needs only one world,
-- applies both to single quantum objects (like the sun) and to statistical ensembles,
-- has no split between classical and quantum mechanics,
-- has no collapse (except approximately in non-isolated subsystems),
-- has no concepts beyond what is taught in every QM course.
I call it the thermal interpretation since it agrees with how one does measurements in thermodynamics (the macroscopic part of QM, derived via statistical mechanics), and it therefore naturally explains the classical properties of our quantum world. It is outlined in my slides
http://arnold-neumaier.at/ms/optslides.pdf
and described in detail in Chapter 7 of my book
Classical and Quantum Mechanics via Lie algebras
http://lanl.arxiv.org/abs/0810.1019

Alfrez said:
Is there any implication of knowing the correct interpretation?

The main advantage of having a good interpretation is clarity of thought, which results in saving a lot of time otherwise spent in the contemplation of meaningless or irrelevant aspects arising in poor interpretations.
 
