Decoherence does not collapse the wavefunction

  • Thread starter ZPower
So, should we think of decoherence as a mathematical abstraction rather than a physical process?
 
As I've come to understand QM, you shouldn't think of the collapse of the wavefunction as a physical process but a conceptual process we apply after the physical act of measurement when we update our information about the system. (Just as you update the value of a Lotto ticket after the drawing, or your suppositions of the likely location of your keys after you see them on the coffee table.)

Likewise superposition is not a physical property of the system but a property of how you are resolving the system in terms of potential observables. A vertically polarized photon "is not in a superposition" of vertical vs. horizontal modes but "is in a superposition" of left-circular and right-circular polarization modes. It is the modes (of measurement) which superpose, not the photon.
I'm not sure if I just didn't understand your meaning properly, but I don't quite agree with that description.

In my view, the superposition state is in fact the "real" state of the system as long as it's in it. For example, take the state |+> = (|0> + |1>)/√2. If you measure in the computational basis you would find for example |1>, but this does not mean that the measurement is simply an update of information or that the state was in |1> all the time, like your keys-on-the-table analogy suggests. In the key case they really were on the table all the time, even before the measurement, but in the |+> case this is not true, because experiments done on the state before the collapse would yield quite different results between |+> and |1>; in particular, measurements in the |+>,|-> basis would find the state |+> 100% of the time.
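To make this concrete, here is a minimal numerical sketch (Python/NumPy; the kets and the Born rule are the standard textbook ones, the script itself is only an illustration):

[code]
import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
plus = (ket0 + ket1) / np.sqrt(2)     # |+>
minus = (ket0 - ket1) / np.sqrt(2)    # |->

def prob(outcome_ket, psi):
    """Born rule: P(b) = |<b|psi>|^2."""
    return abs(np.vdot(outcome_ket, psi))**2

print(prob(ket0, plus), prob(ket1, plus))    # 0.5 0.5 -- undetermined in the 0/1 basis
print(prob(plus, plus), prob(minus, plus))   # 1.0 0.0 -- certain in the +/- basis
[/code]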

I tend to think of it more like asking a grey square whether it's black or white: you're bound to get a non-determined answer, and it's not just a matter of updating the information; the state after the collapse is actually different in a real and measurable way.
 

jambaugh

So, should we think of decoherence as a mathematical abstraction rather than a physical process?
No more so than we should think of entropy as a mathematical abstraction. Entropy has physical meaning but is not an observable of a system. It is rather a quantitative measure of our knowledge about a given system in so far as it is a property of a maximally restrictive class of systems to which we can say a given system belongs as an instance.

[By maximally restrictive, I mean we use all the existent knowledge about the system, not necessarily all simultaneously possible knowledge about the system. In short, I'm not talking about necessarily sharp descriptions; in fact the lack of sharpness is what entropy is quantifying. One may refer to a sharp mode too as a maximally restricted class, but in this case maximal in the sense of using all possible information, not just what is actually known.]

Since decoherence involves an increase in entropy of a system it too is a description of a (maximally restrictive) system class associated with that system.

A class of systems is a mathematical abstraction with perfectly concrete physical meaning when the class is defined in terms of observables. E.g. the class of electrons (specifying mass and charge) for which the z component of spin has been measured at +1/2 and momentum at say some vector value p.

We express that class of systems by writing a wave-function (if it is sharply described as above) or a density operator (which is more general allowing for cases of non-zero entropy). In a laboratory we may instantiate that class (actualize an instance of an electron) which requires physical constraints and measurements.
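To illustrate entropy as a measure of the non-sharpness of a description, here is a minimal sketch (Python/NumPy, assuming the von Neumann entropy S(rho) = -Tr(rho ln rho)): a sharp description (a wave-function) carries zero entropy; a non-sharp one does not.

[code]
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]      # convention: 0 * ln(0) = 0
    return float(-np.sum(evals * np.log(evals)))

plus = np.array([1., 1.]) / np.sqrt(2)
rho_sharp = np.outer(plus, plus.conj())   # sharp class description (a wave-function)
rho_mixed = 0.5 * np.eye(2)               # non-sharp description: maximal ignorance

print(von_neumann_entropy(rho_sharp))     # ~0.0
print(von_neumann_entropy(rho_mixed))     # ~0.693 = ln 2
[/code]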
 
like CSL models.
This is the simplest possibility, with an extremely simple stochastic process. I don't think it is general enough to describe all physical experiments that are being done in the labs. CSL is simple to explain, simple to apply, but it assumes one homogeneous mechanism for all collapses. This is not what we see looking at particle tracks. The collapses are evidently (at least for me) due to the presence of the detectors, and there is no need for (and not much use in) collapsing the wave function in a vacuum.
 

jambaugh

I'm not sure if I just didn't understand your meaning properly, but I don't quite agree with that description.

In my view, the superposition state is in fact the "real" state of the system as long as it's in it. For example, take the state |+> = (|0> + |1>)/√2.
There are multiple issues here. Letting the "real" issue sit for the moment. The "state" |+> is not in a superposition w.r.t. the |+> vs |-> basis but of course is w.r.t. the |0> vs |1> basis. Hence superposition is not "a property of the system" in an absolute sense but rather a relationship between a given ket and our choice of basis.
If you measure in the computational basis you would find for example |1>,
You might so find. Prior to adding this additional physical assumption you only know the probabilities, which is to say you don't know. It is when you actualize the assumption that you "collapse" your knowledge of how the system will subsequently behave. In this sense the quantum collapse is no different from the classical collapse in the case of the glasses....
but this does not mean that the measurement is simply an update of information or that the state was in |1> all the time, like your keys on the table analogy suggests.
The collapse component is simply an update of information. Since the subsequent measurement is not compatible with the implied previous measurement (|+> vs |->), you simultaneously lose any dependence on that previous measurement for future predictions.

Going back to the glasses analogy for a moment. If I last recall seeing my glasses in my car then my probability distribution for where I most likely will find them will take that into account. But once I see them on the coffee table that old assumption is removed.
In the key case they really were on the table all the time, even before the measurement,
Of course and this is where the "glasses" differ from the quantum system but it doesn't detract from the fact that my knowledge about where the glasses might be has been changed by my observing where they are.
but in the |+> case this is not true because experiments done to the state before the collapse would yield quite different results between |+> and |1>,
You can't have your cake and eat it too. Either you did measure |1> or you didn't. You can't go back in time and undo this. So you are talking about cases and not a given system. Once you change the assumption that you did measure |1> vs |0> and that you observed the value |1>, you are "uncollapsing" the wave-function... and so you have the prior prediction...
in particular measurements in the |+>,|-> basis would find the state |+> 100% of the time.
Consider it this way. Suppose you did make the |1> measurement but did so to a given system after I had measured it (but haven't yet told you what observable I measured nor what value I got.)

You would still write the |1> wave-function, even to describe the system prior to your measurement. If I then told you I measured a specific observable you would use that |1> wave function to predict the probability of the value I measured and finally if I said I measured |+> you would collapse the wave-function to |+> prior to my measurement to see what "alice" measured before me.

By reversing the sequence of assumptions made, I have totally changed where you write the |+> description and where you write the |1> description. Can you still then say these are states of reality? Or are they not truly representations of our knowledge about the system in question?
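A minimal Monte Carlo sketch of this reversed bookkeeping (Python/NumPy; the projection and Born-rule steps are the standard ones, the counting scaffolding is illustrative): the forward conditional P(you find + | I found 1) and the reversed conditional P(I found 1 | you find +) both come out as |<+|1>|^2 = 1/2.

[code]
import numpy as np
rng = np.random.default_rng(0)

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

def measure(psi, basis):
    """Born rule plus projection: return (outcome index, post-measurement ket)."""
    p = np.array([abs(np.vdot(b, psi))**2 for b in basis])
    k = rng.choice(len(basis), p=p / p.sum())
    return k, basis[k]

n_1 = n_1_then_plus = n_plus = n_plus_and_1 = 0
for _ in range(100_000):
    a, psi = measure(plus, [ket0, ket1])   # my (hidden) 0/1 measurement
    b, _ = measure(psi, [plus, minus])     # your later +/- measurement
    if a == 1:
        n_1 += 1
        n_1_then_plus += (b == 0)
    if b == 0:
        n_plus += 1
        n_plus_and_1 += (a == 1)

print(n_1_then_plus / n_1)      # P(+ later | 1 earlier) ~ 0.5 = |<+|1>|^2
print(n_plus_and_1 / n_plus)    # P(1 earlier | + later) ~ 0.5 = |<1|+>|^2
[/code]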
 
Quantum mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?
 
Quantum mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?
Quantum mechanics, when it was being conceived, was unsure about the collapse. Schrödinger himself was unsure. Then there came applications, and QM concentrated on applications that do not need collapse. The mechanism of forming tracks in cloud chambers was never explained by QM. The Mott problem (http://en.wikipedia.org/wiki/Mott_problem) discussed probabilities of different tracks but did not say anything about the mechanism itself or about the timing of the events. So physicists decided that one is not supposed to ask about "mechanisms". Why? Because no one (except Schrödinger, but who cares?) asks such questions.

The model of Belavkin and Melsheimer (http://arxiv.org/abs/quant-ph/0512192) is just one possibility, but it is not completely satisfactory. There are other options available. But this is not mainstream physics, so the territory is left to the "decoherence teams" - which form the mainstream approach these days.
 
Consider it this way. Suppose you did make the |1> measurement but did so to a given system after I had measured it (but haven't yet told you what observable I measured nor what value I got.)

You would still write the |1> wave-function, even to describe the system prior to your measurement. If I then told you I measured a specific observable you would use that |1> wave function to predict the probability of the value I measured and finally if I said I measured |+> you would collapse the wave-function to |+> prior to my measurement to see what "alice" measured before me.

By reversing the sequence of assumptions made, I have totally changed where you write the |+> description and where you write the |1> description. Can you still then say these are states of reality? Or are they not truly representations of our knowledge about the system in question?
When I think of an example where you measure on the state without telling me I get the opposite conclusion, explained by the following:

Consider that I start with the state |+>. If I measure in the |+>,|-> basis I would now find the state |+> with 100% probability. Let's now consider what happens if you did a measurement in the |0>,|1> basis without telling me. You would "collapse" the state to one of them, let's just say it happened to be |1>.

Now, without you telling me anything, i.e. my knowledge about the system does not change, I now have a non-zero probability of measuring |-> (50%) if I again measure in my basis. The probability of measuring |-> has thus changed without my knowledge being changed at all.

I can only interpret this as the fact that the physical state has actually changed, which is completely different from any classical analogy, where no amount of information update can ever change the location of either keys or glasses.
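A minimal sketch of the bookkeeping behind this (Python/NumPy, assuming the usual update rule for an unrecorded projective measurement): the predicted probability of |-> jumps from 0 to 1/2 even though no news reached me.

[code]
import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
proj = lambda k: np.outer(k, k.conj())

rho_before = proj(plus)    # I prepared |+>: P(-) = 0

# Your unrecorded {|0>,|1>} measurement, described without knowing the outcome:
rho_after = proj(ket0) @ rho_before @ proj(ket0) + proj(ket1) @ rho_before @ proj(ket1)

print(np.trace(rho_before @ proj(minus)).real)   # 0.0
print(np.trace(rho_after @ proj(minus)).real)    # 0.5: changed with no news reaching me
[/code]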
 
Quantum mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?
Nonlinear quantum mechanics states that collapse occurs by itself.
 
What is the difference between linear and nonlinear quantum mechanics? Which one is correct?
 
Fully linear quantum mechanics does not allow for collapse. A mild non-linearity allows you to have collapses.
In fact the description of an individual system that involves a wave function may be nonlinear, but the evolution of statistical ensembles of systems, described by a density matrix, may still be linear.
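A toy sketch of that division of labor (Python/NumPy; the random-jump model here is purely illustrative, not CSL itself): each individual run undergoes nonlinear random collapses, yet the ensemble-averaged density matrix decays exactly as a linear pure-dephasing master equation predicts.

[code]
import numpy as np
rng = np.random.default_rng(1)

gamma, dt, steps, runs = 1.0, 0.01, 200, 4000
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
psi0 = (ket0 + ket1) / np.sqrt(2)

rho_avg = np.zeros((2, 2))
for _ in range(runs):
    psi = psi0
    for _ in range(steps):
        if rng.random() < gamma * dt:   # a random, discrete collapse event
            psi = ket0 if rng.random() < abs(psi[0])**2 else ket1   # nonlinear jump
    rho_avg += np.outer(psi, psi.conj()) / runs   # average over the ensemble

# The ensemble average obeys a linear (pure-dephasing) master equation:
# the off-diagonal element decays as 0.5 * P(no collapse yet).
print(rho_avg[0, 1], 0.5 * (1 - gamma * dt)**steps)   # both ~ 0.5 * e^-2 ~ 0.068
[/code]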
 

jambaugh

Quantum mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?
As you see there are "interpretational differences". If you hold that "collapse" is a conceptual process then it is meaningless to say "it occurs" but rather one says the theorist "collapses" his description upon new information (my position).

But taking the other side for argument's sake: quantum mechanics describes the evolution of the system between measurements (or post-preparation, or pre-destructive-detection) via unitary operators. The unitarity conserves probability (or, in the relativistic setting, probability current). The problem with describing a collapse (whether it be "real" or not) is that the language assumes an update of assumptions, from what we can predict for the outcome of a measurement to what we know when we assume a specific measured value. Even if "collapse has been realized" we will still, until integrating that assumption, describe the system via the equivalent of a density operator. In this setting "collapse" is represented by decoherence. There is a change in the entropy of the representation. This implies a non-unitary (though still linear?) evolution of the system itself during the measurement process.

Basic QM doesn't describe the evolution during measurement, only between measurements and thus doesn't say anything about linearity vs non-linearity of the measurement process nor about the reality of collapse vs virtuality of collapse. It says that after measurement we can update our wave-function to represent the known measured value. If we don't know it but it is still recorded somewhere, we can use a "classically" probabilistic description (density operator) until we access the record.

Now theorists trying to push the envelope have considered non-linear perturbations of QM to see if they can "explain" collapse or measurement. From my position (which is pretty close to the orthodox CI interpretation) this is not an issue. The distinction between classical and quantum physics is one of fundamental format of description. One does not "explain" a change of description. One can express classical physics in the same format as QM and one gets the same "collapses" when one integrates new measurement values. In so doing one sees collapse as being non-physical.
 

jambaugh

When I think of an example where you measure on the state without telling me I get the opposite conclusion, explained by the following:

Consider that I start with the state |+>. If I measure in the |+>,|-> basis I would now find the state |+> with 100% probability. Let's now consider what happens if you did a measurement in the |0>,|1> basis without telling me. You would "collapse" the state to one of them, let's just say it happened to be |1>.

Now, without you telling me anything, i.e. my knowledge about the system does not change, I now have a non-zero probability of measuring |-> (50%) if I again measure in my basis. The probability of measuring |-> has thus changed without my knowledge being changed at all.

I can only interpret this as the fact that the physical state has actually changed, which is completely different from any classical analogy, where no amount of information update can ever change the location of either keys or glasses.
Yes this is quite correct. Measurement is a physical act and it will sometimes change the physical system. If you think of it being in a state then you must say the state has changed (provided it was not already in an eigen-state).

Take your example again and let me tell you what I did physically w.r.t. placing an intervening measuring device but not tell you the recorded outcome and you will get the correct probabilistic predictions of outcomes for your subsequent measurements if we repeat the process over and over to see the relative frequencies.

You will use a density matrix to describe my act of measurement without knowing the measured outcomes. It will give you the same probabilities for your subsequent measurements as you observe.

Now if we take the cases where I measured values (supposing the predictions of your subsequent measurement were not 50%-50%), I could make more precise predictions of the probabilities of outcomes for those subsequent to |1> measurements, and likewise for those subsequent to |0> measurements. I will, in short, see a correlation between your subsequent measurements and my hidden 1 vs 0 records. This means I can better predict individual outcomes than you, since I have more information; however it does not invalidate the probability distribution you see. My sharp |1> vs |0> is no less nor more valid than your rho at predicting, given the information we individually have available.
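A minimal sketch of these two equally valid descriptions (Python/NumPy; the tilted measurement angle is an arbitrary choice to avoid the degenerate 50%-50% case): my sharp records split the runs into two classes with different conditional predictions, while your rho reproduces exactly their average.

[code]
import numpy as np

theta = np.pi / 8                                  # arbitrary tilt, so outcomes aren't 50/50
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
up = np.array([np.cos(theta), np.sin(theta)])      # your subsequent measurement direction
proj = lambda k: np.outer(k, k.conj())

# Conditioned on my hidden record, the runs fall into two sharp classes:
p_up_given_0 = np.trace(proj(ket0) @ proj(up)).real    # cos^2(theta) ~ 0.854
p_up_given_1 = np.trace(proj(ket1) @ proj(up)).real    # sin^2(theta) ~ 0.146

# Ignorant of my record, you use rho = (|0><0| + |1><1|)/2 and get their average:
rho = 0.5 * (proj(ket0) + proj(ket1))
p_up_marginal = np.trace(rho @ proj(up)).real          # 0.5

print(p_up_given_0, p_up_given_1, p_up_marginal)
[/code]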

I further assert there is no foundational difference between how the kets and the density operators are used. Neither of them is a "state"; both are representations of probable behavior.

You still didn't address my reversed application of the "mode vectors" ("state vectors" as you'd say). The example shows the time symmetry of QM and the appropriate time-reversed parsing of the experimental predictions, and it shows the "kets" changing to different "states" between a given pair of measurements purely because we are reversing our conditional probabilities. It shows, to my mind, that the kets are not referring to states of the system but rather to states of our knowledge about the system.
 
Even if "collapse has been realized" we will still, until integrating that assumption, describe the system via the equivalent of a density operator. In this setting "collapse" is represented by decoherence. There is a change in the entropy of the representation. This implies a non-unitary (though still linear?) evolution of the system itself during the measurement process.
If the collapse were described mathematically in this way - then we would certainly have a problem. But it can be described in a different way. Collapse happens objectively - as it leaves an objective "track"; the wave function changes in a mildly nonlinear way, then it continues its non-unitary evolution until the next collapse, etc. The non-unitarity is negligible far from the detectors; the evolution is standard and unitary in empty space without detectors.

This completely describes the evolution of a single quantum system under continuous monitoring.

Yet, if we are not interested in a single quantum system, but care only about averages over an infinite ensemble of similarly prepared systems, only then, if we wish, do we do the averaging and get the perfectly linear Liouville master equation for the density matrix.

In short:

Single systems are described by collapsing wave functions; ensembles are described by a non-collapsing, continuous-in-time, linear master equation for the density matrix. That's all.
 
Another question on decoherence: take stat mechanics. There we have an atomic system and an environment (consisting of atomic subsystems that will be traced over) plus an interaction between them, V(t). The atomic system could be, for example, an atom, and the environment could represent collisions with other atoms or particles; the result of the interaction would be a broadening and shifting of its levels, e.g. creating a finite lifetime for the atomic states.
To get a finite lifetime one ABSOLUTELY NEEDS to trace over.

Now take decoherence. We again have an atomic system and an environment. The collapse we get is again triggered by the interaction and again by the many degrees of freedom (= huge number of "environmental" particles). What is not so clear is the "tracing over" mechanism in decoherence. What are we tracing over?
 

jambaugh

If the collapse were described mathematically in this way - then we would certainly have a problem. But it can be described in a different way. Collapse happens objectively - as it leaves an objective "track"; the wave function changes in a mildly nonlinear way, then it continues its non-unitary evolution until the next collapse, etc. The non-unitarity is negligible far from the detectors; the evolution is standard and unitary in empty space without detectors.
You're speaking of a particle track in a cloud chamber. We can describe that track as a sequence of position measurements and indeed speak of the idealized limit of continuous measurement. But the reality is that the track is a discrete sequence of position measurements. This says nothing new for the present discussion. Yes we can measure the position of a quantum. Yes we can measure it twice, three times, 10^14 times.

Now, this non-linearity which you assert: where is it required, and at what level are you applying it? If you want to describe a quantum system with a position observable, observed every 10^-5 seconds or so, you have one description for the future measurements sans incorporation of the intermediate measurements. You get for that last measurement a nice classical probability distribution. You get for adjacent measurements nice conditional probabilities which incorporate the dynamics and the uncertainties of momenta.

You update your description by inputting say the first, or say the 108th, position measurement value, and you get a different description because you input more information. The description has "collapsed". Input more actualized values and you collapse it more. Eventually you have something which looks very close to a classical particle trajectory, but it is still an expression of where you saw bubbles, i.e. records of measurements. You still express the measurement within the linear algebra over the Hilbert space. There is no need for, nor empirical evidence supporting, the introduction of non-linearity in the dynamics at the level of the operator algebra. We already have the non-linearity of the positional dependence we see in all mechanics.

This completely describes the evolution of a single quantum system under continuous monitoring.
But of course. The description is that of a sequence of measured values (of position). That is all we ever see: measurements. This is why I harp on the fact that assertions of "what goes on" between measurements are meaningless. Rather we can predict outcomes of measurement and evolve our prediction based on known dynamics. The dynamically evolving wave-function (or equivalent density op) is a mathematical representation of that array of predictions.

Yet, if we are not interested in a single quantum system, but care only about averages over an infinite ensemble of similarly prepared systems, only then, if we wish, do we do the averaging and get the perfectly linear Liouville master equation for the density matrix.
So you declare. But why ignore the equivalence of representation, even for single quantum systems? Why are you so opposed to using the mathematical tools best equipped to express both the quality and degree of knowledge we have about how a single system will behave in subsequent measurements?

Here is our fundamental difference. You acknowledge that the density operator is a probabilistic description and thus expresses behavior of an ensemble. Let me use a different word, class instead of ensemble. We should be general enough to not presuppose the prior objective existence of the "ensemble" but rather allow "on the fly" instantiation of members. I can speak of the probability of outcomes of a single die throw because I can instantiate an arbitrary number of throws of that single die. There is no fixed number of outcomes and so I speak of the class of throws and not the set or ensemble.

(For other readers let me refresh memories with the definition: a class is a collection of things defined by common attributes, as opposed to sets, which are defined purely in terms of membership; i.e. sets must have their elements defined prior to the set definition, while classes are defined by the criterion by which an instance is identified as being a member of that class. Thus we cannot, prior to measurement, say a given electron is an element of the set of electrons with spin z up. After measurement we have used the property of its spin to define its membership in the class of electrons whose spin has been measured as up. The act of measurement and value defines the class and defines the electron as an instance of it.)

Now getting back to quantum theory. How can you define a probability for a single quantum? It will either be measured with one value or another, not an ensemble of values, so one cannot speak of the probability of a single quantum's behavior as an intrinsic property of that one reality. Similarly we cannot observe, say, an interference pattern for a single quantum. It just goes "blip", leaving a single position record. Rather one speaks of the class of equivalent quanta and the frequency of outcomes for that class, which we can repeatedly instantiate by virtue of a source of such quanta to which we may affix a symbol [tex]\psi_0[/tex]. The "ket" or Hilbert space vector or wave-function from which we calculate various transition probabilities or measurement probabilities is a symbol attached to a source of individual quanta. The wave-function is as much a representation of an "ensemble" as is a density operator. The interference pattern of the wave-function, like the probability of any outcome, can only be confirmed by an ensemble of experiments, not a single instance.

This I assert is the only interpretation consistent with operational usage: that the wave-function and density operator are both the quantum mechanical analogue of a classical probability distribution.

In short:

Single systems are described by collapsing wave functions; ensembles are described by a non-collapsing, continuous-in-time, linear master equation for the density matrix. That's all.
In short, single systems are prepared in such a way that we know they are members of a class of systems which we represent by a wave-function. Under measurement, given the fact that an act of measurement is a physical interaction, we update the class of system to which we assign the single system being described. Sometimes with less than maximal information the most accurate available class description is not a wave-function but a density operator. That is all.

Now my description is less assertive than yours, do you agree? We both agree we can speak of a class of systems "with the same wave-function" right?

If you can bring yourself to acknowledge that it is possible, and useful, to sometimes... upon occasion, speak of a class of quantum systems with the same set of values for a given complete observable, and hence the same wave-function, then can you explain to me, other than for personal spiritual reasons, how you can say this is ever not the case?
 
If you want to describe a quantum system with a position observable, observed every 10^-5 seconds or so
In a cloud chamber it is not you who decides how often the records are being made. It is decided by the coupling. The timing is random; it is part of the random process.

You update your description by inputting say the first or say the 108th position measurement value and you get a different description because you input more information. The description has "collapsed". Input more actualized values and you collapse it more.
I am not inputting anything. All is done through the coupling. What I do is - at the end I may have a look at the track.

Eventually you have something which looks very close to a classical particle trajectory but it is still an expression of where you saw bubbles i.e. records of measurements. You still express the measurement within the linear algebra over the Hilbert space. There is no need for nor empirical evidence supporting the introduction of non-linearity in the dynamics at the level of the operator algebra.
Try to accomplish the above with a linear process and show me the result.


Now getting back to quantum theory. How can you define a probability for a single quantum?
I am describing the stochastic process that reproduces what we see, including the timing of the events. You can compare my simulation with experiment. And how do you compare two results of an experiment? You have two photographs of an interference pattern with 10000 electrons each time. One done on Monday and one on Tuesday. Of course the dots are in different places. And yet you notice that both describe the same phenomenon. How? Because you neglect the exact places and compare statistical distributions computed using statistical procedures applied to your photographs, each with 10000 dots.
Is there a probability involved? Somehow it is, but it is hidden.
The same when you compare two tracks in an external field. They are not the same. And yet they have similar "features": for instance the average distance between dots, approximately the same curvature when you average, etc. Is probability involved? Somehow it is, but it is hidden in the application of statistics to finite samples.

If you can bring yourself to acknowledge that it is possible, and useful, to sometimes... upon occasion, speak of a class of quantum systems with the same set of values for a given complete observable, and hence the same wave-function, then can you explain to me, other than for personal spiritual reasons, how you can say this is ever not the case?
I prefer a down-to-earth approach - comparing simulations based on a theory with real data coming from real experiments. I have nothing against classes. But for me the success of any theory is in being able to simulate processes that we observe in our labs.

I am stressing the importance of timing - which is usually dynamical and not by "instantaneous measurement at a chosen time" from the textbooks. Textbooks do not know how to deal with dynamical timing - which is standard in the labs.
 

jambaugh

I am describing the stochastic process that reproduces what we see, including the timing of the events. You can compare my simulation with experiment. And how do you compare two results of an experiment? You have two photographs of an interference pattern with 10000 electrons each time. One done on Monday and one on Tuesday. Of course the dots are in different places. And yet you notice that both describe the same phenomenon. How? Because you neglect the exact places and compare statistical distributions computed using statistical procedures applied to your photographs, each with 10000 dots.
Is there a probability involved? Somehow it is, but it is hidden.
The same when you compare two tracks in an external field. They are not the same. And yet they have similar "features": for instance the average distance between dots, approximately the same curvature when you average, etc. Is probability involved? Somehow it is, but it is hidden in the application of statistics to finite samples.
Your simulation matches experiments only in the aggregate (same relative frequencies, same lines of cloud chamber bubbles, but not identical individual outcomes); thus your inference is again about classes of individual quanta. I'm sure you're doing good work, but my objections are to how you use the term "collapse". If you are simulating entanglement then you are positively not simulating the physical states of the quantum systems, since you would necessarily satisfy Bell's inequality and/or fail to get the proper correlations. You would need to be simulating the (probability) distributions of outcomes directly, which would involve nothing more than doing the QM calculations.

I prefer down to earth approach - comparing simulations based on a theory with real data coming from rel experiments. I have nothing against classes. But for me the success of any theory is in being able to simulate processes that we observe in our labs.
The issue is what the theory says, the semantics of the language you use. Words mean things. I can simulate a given probability distribution, but that won't mean the internals of my simulation correspond to a physical process which upon repetition matches that distribution. My point is that the theory matches what goes on in the lab only in so far as it makes probabilistic predictions, quite accurate ones, but only for aggregates of (and hence classes of) experiments.

I am stressing the importance of timing - which is usually dynamical and not by "instantaneous measurement at chosen time" from the textbooks. Textbooks do not know how to deal with the dynamical timing - which a standard in the labs.
The fact that you think the measurement is an instantaneous process as represented in the textbooks is where I see you misinterpreting. The mathematics is instantaneous because it represents something one level of abstraction above the physical process, namely the logic of the inferences we make about predictions. (There is no "timing" in mathematics; 2+2=4 eternally.) The "collapse problem" is not with the theory but with the mind misunderstanding to what a specific component of the theory is referring.

The representation of measurement goes beyond "instantaneous" as I pointed out in the (logically) reversed representation of an experiment. I'll repeat in more detail:

Consider a single experimental setup. A quantum is produced from a random source, a sequence of measurements is made, A then B then C (which take time and room on the lab's optical bench or whatever), and then a final detector registers the system to assure a valid experiment. If you like you can consider intermediate dynamics as well, but for now let's keep it simple.

What does theory tell us about the sequence of measurements? Firstly there is randomness in outcomes. Secondly there is correlation of measured values. How are they correlated? QM says...

[tex] Prob(B=b |A=a) = |\langle b|a\rangle |^2=Tr(\rho_b \rho_a)[/tex]
[tex] Prob(C=c |B=b) = |\langle c|b\rangle |^2=Tr(\rho_c \rho_b)[/tex]
But then only if the measurements are complete i.e. non-degenerate (unless you're using density operators in which case everything works fine.)

We can reverse the conditional probabilities:
[tex]Prob(A=a|B=b) = |\langle a|b\rangle |^2=|\langle b|a\rangle |^2[/tex]
et cetera.
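These identities are easy to check numerically; a minimal sketch (Python/NumPy, with random kets purely for illustration):

[code]
import numpy as np
rng = np.random.default_rng(2)

def random_ket(dim=2):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

a, b = random_ket(), random_ket()
rho_a, rho_b = np.outer(a, a.conj()), np.outer(b, b.conj())

print(abs(np.vdot(b, a))**2)           # |<b|a>|^2
print(np.trace(rho_b @ rho_a).real)    # Tr(rho_b rho_a) -- the same number
print(abs(np.vdot(a, b))**2)           # |<a|b>|^2 -- the reversed conditional, same again
[/code]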

When we point to the lab bench at the region between measuring device A and measuring device B we might say "state [tex]|a\rangle[/tex]" but that's just saying that over at measuring device A we registered "a" and so that is the condition on subsequent measurements (whether they be B, or C or D). We can similarly point to that same region and say "state [tex] \langle b|[/tex]" but we'd mean that a subsequent measurement "b" is made and so this is the condition on prior measurements (be they A, or C or D). Whether we are "forward tracking" or "back tracking" the causal correlation between measurements, we are expressing these correlations via the "bras" and "kets", not modeling a physical state of the system, and we can only confirm we are using the correct ones by carrying out many measurements thus they represent at best classes of systems.

e.g. [tex]|a\rangle[/tex] is the class of systems for which A has been measured with value 'a'.

Now the business with collapse is a matter of transitioning from the point where we make a measurement and acknowledge a particular value that was measured. You can say it this way:

"We consider an ensemble of systems for which A was observed and consider the subset for which A=a" Here we "collapse" to the subset of a fixed ensemble.

Or we can speak in the singular.
"We consider a single quantum for which A is observed, and then..." wait for it .... "we consider the case of an actual measured value of A=a." Now we know the quantum is, for the purposes of subsequent measurements, in the class of those for which a measured value A=a has occurred.

We collapse the class to which we assign the single quantum for our purposes of making subsequent predictions. The collapse is not itself a physical act it is a conceptual step we make corresponding to the physical act of measurement. That measurement may be delayed, may take a very short time or may take an arbitrarily long time. The details are unimportant to the conceptual process of incorporating that information (or as is more typical considering a hypothetical possibility.)

Your humble stochastic simulations are fine research --I am sure-- but please refer to the physical processes by their rightful name, "interaction", not "collapse".
 
Your simulation matches experiments only in the aggregate (same relative frequencies, same lines of cloud chamber bubbles, but not identical individual outcomes); thus your inference is again about classes of individual quanta. I'm sure you're doing good work, but my objections are to how you use the term "collapse". If you are simulating entanglement then you are positively not simulating the physical states of the quantum systems, since you would necessarily satisfy Bell's inequality and/or fail to get the proper correlations. You would need to be simulating the (probability) distributions of outcomes directly, which would involve nothing more than doing the QM calculations.
I am sure I am getting all the correlations that are seen in experiments. I do not care about Bell inequalities which do not even address the continuous monitoring of single quantum systems.

The issue is what the theory says, the semantics of the language you use. Words mean things. I can simulate a given probability distribution, but that won't mean the internals of my simulation correspond to a physical process which upon repetition matches that distribution. My point is that the theory matches what goes on in the lab only in so far as it makes probabilistic predictions, quite accurate ones, but only for aggregates of (and hence classes of) experiments.
I am not talking about simulating probability distributions. I am talking about stochastic processes and their trajectories in time.

The fact that you think the measurement is an instantaneous process as represented in the textbooks is where I see you misinterpreting. The mathematics is instantaneous because it represents something one level of abstraction above the physical process, namely the logic of the inferences we make about predictions. (There is no "timing" in mathematics; 2+2=4 eternally.) The "collapse problem" is not with the theory but with the mind misunderstanding to what a specific component of the theory is referring.
The collapse is a part of a stochastic process. Sometimes we have one collapse - the time of the collapse is always a random variable. That is what the standard approach to QM does not take into account - because of historical reasons and because of the inertia of human thought.

The representation of measurement goes beyond "instantaneous" as I pointed out in the (logically) reversed representation of an experiment. I'll repeat in more detail:

Consider a single experimental setup. A quantum is produced from a random source, a sequence of measurements is made, A then B then C (which take time and room on the lab's optical bench or whatever), and then a final detector registers the system to assure a valid experiment. If you like you can consider intermediate dynamics as well, but for now let's keep it simple.

What does theory tell us about the sequence of measurements?
It tells us absolutely nothing about the timing. You are consistently neglecting this issue.

Your humble stochastic simulations are fine research --I am sure-- but please refer to the physical processes by their rightful name, "interaction", not "collapse".
In fact, I do not use the term "interaction", because interaction is usually understood as a "Hamiltonian interaction". I prefer the term "non-Hamiltonian coupling".
 

jambaugh

I am sure I am getting all the correlations that are seen in experiments. I do not care about Bell inequalities which do not even address the continuous monitoring of single quantum systems.
Bell's inequalities (and their violation) are about correlations, if you don't care then you don't care.
I am not talking about simulating probability distributions. I am talking about stochastic processes and their trajectories in time.
I know you are not talking about it, but that is what you are doing. You are saying your computer-model stochastic process matches the probability distributions for physical systems. There and only there can you compare with experiment. You speak of "collapse" but there's no reason to believe the "collapses" in your stochastic model match anything "out there in actuality". It is the old classic phenomenologist's barrier: "We can only know what we experience." Yes, it is too restrictive for science in general. At the classical scale we can infer beyond the pure experience, but QM specifically pushes us to the level where that barrier is relevant, and we must be more the positivist or devolve into arguments over "how many angels can dance on the head of a pin".

The collapse is a part of a stochastic process. Sometimes we have one collapse - the time of the collapse is always a random variable. That is what the standard approach to QM does not take into account - because of historical reasons and because of the inertia of human thought.
Yes the collapse is a part of a stochastic process, but that process is a conceptual process (your model or mine), not a physical process (actual electrons). Again you speak of "the time of the collapse" as if you can observe physical collapse, and again I ask "HOW?" Until then the "why QM does not take this into account" question lacks foundation.

I think you misuse the term "collapse" where you should be speaking of "decoherence" which is the physical process (of external random physical variables i.e. "noise" being introduced into the physical system.)

It tells us absolutely nothing about the timing. You are consistently neglecting this issue.
And I'm explaining why it not only can be neglected but should be. The timing of "collapse" is not a physically meaningful phrase. I can collapse the wave-function (on paper) at any time I choose after the measurement is made. If you'd like to discuss the physical process of measurement itself then let's, but in a different thread, as that is quite a topic in itself.


In fact, I do not use the term "interaction", because interaction is usually understood as a "Hamiltonian interaction". I prefer the term "non-Hamiltonian coupling".
"Coupling" is "interaction", Hamiltonians are how we represent the evolution of the whole composite of the two systems being coupled. When you focus on part of that whole you loose the Hamiltonian format but it is still an interaction. You can still work nicely with this focused case using ... pardon my bringing this up again... density matrices and a higher order algebra. The density operators can still be "evolved" linearly but no longer with a adjoint action of Hamiltonian within the original operator algebra. You see then decoherence occur (the entropy of the DO increases over time, representing this random stochastic process you're modeling). I think you'd find it of value to determine exactly how your computer models of stochastic processes differs from or is equivalent to this sort of representation.

I think your prejudice against DO's (describing a single system) is what is keeping you from understanding this fully. The dynamics of the coupling of system to episystem can be expressed via a hamiltonian on the composite system + episys. and then tracing over the "epi" part yields a non-sharp and "decohering" system description...but again only expressible as a density operator.
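A minimal sketch of that trace-over step (Python/NumPy; a single "environment" qubit coupled by a CNOT stands in for the episystem, which is of course a cartoon): the composite state remains pure while the reduced system description decoheres.

[code]
import numpy as np

# System qubit starts sharp in |+>, environment qubit in |0>.
plus = np.array([1., 1.]) / np.sqrt(2)
env0 = np.array([1., 0.])
psi = np.kron(plus, env0)                       # composite state, basis |s e>

# A CNOT-style coupling: the environment "records" the system's 0/1 value.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi = CNOT @ psi                                # now (|00> + |11>)/sqrt(2), still pure

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (s, e, s', e')
rho_sys = np.einsum('iaja->ij', rho)            # partial trace over the environment

print(rho_sys)   # [[0.5, 0], [0, 0.5]]: coherences are gone in the reduced description
[/code]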

Again I submit when you speak of a "wave function valued random variable" (which it seems to me you are using) you are effectively describing a density operator.

Consider a random distribution of Hilbert space vectors with corresponding probabilities:
[tex] \{(\psi_1,p_1),(\psi_2,p_2),\cdots\}[/tex]
it is equivalently realized as a density operator:
[tex]\rho = \sum_k p_k \rho_k[/tex]
where
[tex]\rho_k = \psi_k\otimes\psi^\dagger_k.[/tex]

That IS what the density operator represents pragmatically and within the modern literature. Yes when we speak of a (random) ensemble of systems we must use density operators but that isn't the D.O.'s definition. A probability can be associated with a single system in that it expresses our knowledge about that system in the format of: to what class of systems that one belongs. In expressing this we understand the definition of the value of a probability comes from the class not from the singular system. A D.O. is a probability distribution over a set of Hilbert space vectors e.g. wave-functions.
 
Bell's inequalities (and their violation) are about correlations, if you don't care then you don't care.

I know you are not talking about it but that is what you are doing. You are saying your computer model stochastic process matches the probability distributions for physical systems.
It matches more than that. It matches also the fact that in the real world probabilities are calculated out of the counting and averaging of characteristics of single events, and not out of the calculating of integrals. Those who neglect that fact are deliberately blind to a part of the reality. They say: "we need just tools for calculating numbers". Well, that's their choice.

You speak of "collapse" but there's no reason to believe the "collapses" in your stochastic model matches anything "out there in actuality".
There are no reasons to believe anything. Each belief is just a personal choice. Like choosing "we only need to know how to calculate numbers and nothing more".

It is the old classic phenomenologist's barrier. "We can only know what we experience." Yes it is too restrictive for science in general. At the classical scale we can infer beyond the pure experience but QM specifically pushes us to the level where that barrier is relevant and we must be more the positivist or devolve into arguments over "how many angels can dance on the head of a pin".
QM "pushes" some physicists and some philosphers into what you call "positivism", but some are more resistant than others. But even so, the "event" based model can calculate more than the posivitistic "don't ask questions, just calculate" model. So, also with a positivistic attitude you are behind.

Yes the collapse is a part of a stochastic process, but that process is a conceptual process (your model or mine), not a physical process (actual electrons).
Well, Hilbert spaces, wave functions, operators, spacetime metrics, are also conceptual. So what?

Again you speak of "the time of the collapse" as if you can observe physical collapse and again I ask "HOW?" Until then the "why QM does not take this into account" question lacks foundation.
They always come in pairs: (collapse, event). We observe events. Collapses are in the Platonic part of the world. Nevertheless, if you want to simulate events you need the collapses. Like in order to calculate orbits of planets you need to solve differential equations. Differential equations are in the Platonic world as well.

I think you misuse the term "collapse" where you should be speaking of "decoherence" which is the physical process (of external random physical variables i.e. "noise" being introduced into the physical system.)
"Random variables"? "External"? "noise"? Are these better or sharper terms? I strongly doubt.

And I'm explaining why it not only can be neglected but should be. The timing of "collapse" is not a physically meaningful phrase.
It is not a physical phrase. "Timing of the event" is such. But they always come in pairs.

I can collapse the wave-function (on paper) at any time I choose after the measurement is made. If you'd like to discuss the physical process of measurement itself then let's, but in a different thread, as that is quite a topic in itself.
Right. You can collapse a wave-function on paper and you can erase a differential equation on paper. This will not destroy the planet's orbit.

"Coupling" is "interaction", Hamiltonians are how we represent the evolution of the whole composite of the two systems being coupled. When you focus on part of that whole you loose the Hamiltonian format but it is still an interaction. You can still work nicely with this focused case using ... pardon my bringing this up again... density matrices and a higher order algebra. The density operators can still be "evolved" linearly but no longer with a adjoint action of Hamiltonian within the original operator algebra. You see then decoherence occur (the entropy of the DO increases over time, representing this random stochastic process you're modeling). I think you'd find it of value to determine exactly how your computer models of stochastic processes differs from or is equivalent to this sort of representation.
You can play with density matrices, but they will not let you understand and simulate the observed behavior of a unique physical system. You may deliberately abandon that, you may decide "I don't need it, I don't care", but even in this case I am pretty sure that is a forced choice. You choose it because you do not know anything better than that. You even convince yourself that there can't be anything better. But what if there can be?

I think your prejudice against DO's (describing a single system) is what is keeping you from understanding this fully. The dynamics of the coupling of system to episystem can be expressed via a hamiltonian on the composite system + episys. and then tracing over the "epi" part yields a non-sharp and "decohering" system description...but again only expressible as a density operator.
It is not so much my prejudice. It's my conscious choice.

Again I submit when you speak of a "wave function valued random variable" (which it seems to me you are using) you are effectively describing a density operator.
Well, it is like saying: when you speak of a function, you effectively speak about its integral. In a sense you are right, but knowing a function you can do more than just compute one of its characteristics.

Consider a random distribution of Hilbert space vectors with corresponding probabilities:
[tex] \{(\psi_1,p_1),(\psi_2,p_2),\cdots\}[/tex]
it is equivalently realized as a density operator:
[tex]\rho = \sum_k p_k \rho_k[/tex]
where
[tex]\rho_k = \psi_k\otimes\psi^\dagger_k.[/tex]
This is one way. Now, try to go uniquely from your density matrix to the particular realization of the stochastic process. You know it can't be done. Therefore there is more potential information in the process than in the Markov semi-group equation.
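That non-uniqueness is easy to exhibit; a minimal sketch (Python/NumPy): two quite different wave-function-valued realizations, the same density matrix.

[code]
import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
mix = lambda pairs: sum(p * np.outer(k, k.conj()) for k, p in pairs)

rho_A = mix([(ket0, 0.5), (ket1, 0.5)])    # realization A: 50/50 over {|0>, |1>}
rho_B = mix([(plus, 0.5), (minus, 0.5)])   # realization B: 50/50 over {|+>, |->}
print(np.allclose(rho_A, rho_B))           # True: the density matrix cannot tell them apart
[/code]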

That IS what the density operator represents pragmatically and within the modern literature. Yes when we speak of a (random) ensemble of systems we must use density operators but that isn't the D.O.'s definition.
No, I don't have to. Like having a function, I don't have to calculate its integral. I can be more interested in its derivative, for example. Or I can modify its values on some interval.

A probability can be associated with a single system in that it expresses our knowledge about that system in the format of: to what class of systems that one belongs. In expressing this we understand the definition of the value of a probability comes from the class not from the singular system. A D.O. is a probability distribution over a set of Hilbert space vectors e.g. wave-functions.
Well, you are speaking about "our knowledge" while I am speaking about our attempts to understand the mechanism of formation of events. A mechanism that can lead us to another, perhaps even better mechanism, without random numbers at the start.
 

jambaugh

Pardon the long delay in reply, I've been tied up with the holidays and family...
...
There are no reasons to believe anything. Each belief is just a personal choice. Like choosing "we only need to know how to calculate numbers and nothing more".
Then you see no distinction between belief in voodoo and belief in atoms. There is so much wrong with this statement I don't know where to begin.
QM "pushes" some physicists and some philosphers into what you call "positivism", but some are more resistant than others. But even so, the "event" based model can calculate more than the posivitistic "don't ask questions, just calculate" model. So, also with a positivistic attitude you are behind.
Resistant or not, what you can calculate doesn't validate the identification of your calculus with "reality", especially when there exist multiple methods of calculation. Reality is not the mathematics; it is the empirical assumptions which cannot be ignored. I can ignore your stochastic processes without any loss in the fidelity of the predictions of QM.
Well, Hilbert spaces, wave functions, operators, spacetime metrics, are also conceptual. So what?
So they are not "the reality" but our tools for calculating what does or may happen... and we err in forgetting this fact. (e.g. when we wonder about collapse (and the timing thereof) as if it were happening other than on paper or in the mind of the holder of the concept.)
They always come in pairs: (collapse, event). We observe events. Collapses are in the Platonic part of the world. Nevertheless, if you want to simulate events you need the collapses. Like in order to calculate orbits of planets you need to solve differential equations. Differential equations are in the Platonic world as well.
OMG you are a Platonist? No wonder....
You say "Platonic part of the world" I say "on paper". Are we just arguing semantics or do you actually believe there is a real universe of mathematical forms?

BTW we could calculate orbits prior to the development of differential calculus. We simply extended into the future the epicycle series matching prior observations. Of course the differential calculus is superior as it relates the behavior to e.g. the masses of the bodies and thus eliminating the infinite series of variables which must be determined empirically....

and yet again, when you speak of "the time of the collapse" as if you can observe physical collapse, I ask "HOW?" Until then the "why QM does not take this into account" question lacks foundation.

"Random variables"? "External"? "noise"? Are these better or sharper terms? I strongly doubt.
I placed some of these terms in quotes because they were common-usage synonyms for the sharper ones. But YES, "random variable" has a specific sharp meaning: the symbol representing outcomes of a class of empirical events, specifically outcomes to which we can assign probabilities. And "external" has a perfectly well defined operational meaning. We can isolate a system from external effects without changing the system itself (as a class, i.e. defined by its spectrum of observables and degrees of freedom).

What is more important, "external" and "noise" have distinct operational meanings. You can "externally" inject "noise" into a system and see the effect. What meaning is there for "collapse" except as a calculation procedure?

Right. You can collapse a wave-function on paper and you can erase a differential equation on paper. This will not destroy the planet's orbit.
Very good. That's progress. Now then you agree there is a "collapse on paper" but you seem to be saying there is also a "collapse in reality" which the paper process is representing. Correct?

You can play with density matrices, but they will not let you understand and simulate the observed behavior of a unique physical system. You may deliberately abandon that, you may decide "I don't need it, I don't care", but even in this case I am pretty sure that is a forced choice. You choose it because you do not know anything better than that. You even convince yourself that there can't be anything better. But what if there can be?
"anything better" is a value judgment. Let us establish the value judgment within which we work as physicists. I say "there can't be anything better" specifically in the context of the prediction of physical outcomes of experiments and observables. By what value system do you claim something that is "better"?
It is not so much my prejudice. It's my conscious choice.
A prejudice may or may not be a conscious choice. The point is that it is an a priori judgment. Revisit it, and ask instead what is the justification for that judgment. I know a man who consciously ignores the evidence of evolution because it might undermine his faith in the literal "truth" of the bible. Are you doing the same w.r.t. density operators?

I keep bringing these up because, like using differential equations instead of epicycles, they provide more insight into what is mathematically necessary to predict physical events. What is excised by their use vs wave functions must not necessarily be a component of physical "reality". Most importantly, one finds there is no distinction between a "quantum probability" vs a "classical probability", and so no distinction in the interpretation of their "collapse (on paper)" (which, recall, was the reason I brought them up to begin with).

Well, it is like saying: when you speak of a function, you effectively speak about its integral. In a sense you are right, but knowing a function you can do more than just compute one of its characteristics.
Yes you have more components to play with (like with epicycles you have more variables to tweak). The important point is that with the DO's you have less yet no loss of predictive information. Thus the "more" you refer to is not linked or linkable to any empirical phenomena. Does it then still have physical meaning in your considered opinion?

This is one way. Now, try to go uniquely from your density matrix to the particular realization of the stochastic process. You know it can't be done. Therefore there is more potential information in the process than in the Markov semi-group equation.
Again see my point above... what utility does this procedure have if it does not change what one can empirically predict? (I do not deny it might have some utility but I call your attention to the nature of that utility if it does manifest.)
No, I don't have to. Like having a function, I don't have to calculate its integral. I can be more interested in its derivative, for example. Or I can modify its values on some interval.
Yes you can do what you like as a person but are you then doing physics or astrology? To express the maximal known information about a system in terms of usage common to the physics community you really really should use density operators as they are understood in that community.
Well, you are speaking about "our knowledge" while I am speaking about our attempts to understand the mechanism of formation of events. A mechanism that can lead us to another, perhaps even better mechanism, without random numbers at the start.
Then you are on a speculative quest. That is fine and good. But acknowledge that you speculate instead of declaring the orthodox to be "wrong". When you find that mechanism and can justify the superiority of believing the reality of it then come back.

Let me recall for you the thousands of amateur "theorists" who post on the various blogs and forums about how "Einstein is wrong because I can predict what he predicts by invoking an aether". They justify their noisy, insistent proclamations by saying they're "seeking a mechanism to explain".... An explanation is always in terms of other phenomena, and when someone seeks to explain in terms of fundamentally unobservable phenomena there is no merit in it.

Yes, I am a positivist when it comes to physics. Pure deduction can only link between propositions; it cannot generate knowledge on its own. However, too many times we find implicit hidden axioms in the logic of arguments about nature. Under further scrutiny we find those implicit axioms are chosen out of wish fulfillment to justify the desired conclusions. The only way to avoid this is to adhere to a positivistic discipline: stick to terms which either have operational meaning or explicitly mathematical meaning.

If one does not grant "reality status" to the wave function, in the form e.g. of Bohmian pilot waves, then there is no need to explain collapse; it is explained already in the paper version in a simple, trivially obvious way.

The chain of explanation must stop somewhere. It isn't "turtles all the way down" (http://en.wikipedia.org/wiki/Turtles_all_the_way_down). I see that quantum mechanics is as it is because it is the limit of our ability to explain in terms of more fundamental empirical phenomena. As the mathematician must eventually stop the chain of formal definition at the point of fundamental undefined terms, so too the physicist must stop the chain of explanation at the point of fundamental unexplained phenomena. At that level physics must remain positivistic.
 
So they are not "the reality" but our tools for calculating what does or may happen... and we err in forgetting this fact. (e.g. when we wonder about collapse (and the timing thereof) as if it were happening other than on paper or in the mind of the holder of the concept.)
You are missing the point. Everybody is calculating lots of things. And you too. There is nothing wrong with calculations. There is nothing wrong with solving differential equations - they are on paper or in the mind.

The point is whether at the end of your calculation you get something that you can compare with observations. In this respect there is no difference between solving differential equations and models with collapses. In each case at the end you get numbers or graphs that you can compare with experimental data.

So, your war is misdirected.
 

jambaugh

You are missing the point. Everybody is calculating lots of things. And you too. There is nothing wrong with calculations. There is nothing wrong with solving differential equations - they are on paper or in the mind.

The point is whether at the end of your calculation you get something that you can compare with observations. In this respect there is no difference between solving differential equations and models with collapses. In each case at the end you get numbers or graphs that you can compare with experimental data.

So, your war is misdirected.
What you say here is correct w.r.t. calculations yielding observable predictions. But the validity of a calculation does not imply the reality of the mathematical objects or processes used.

Specifically, the calculation step "collapse the wave-function" does not, just by virtue of giving correct empirical predictions, imply there is a physical collapse occurring. It is thus incorrect to speak of "when the collapse occurs" or "the cause of a collapse" as if it were physical.

That is what I have been consistently attacking and the issue you keep sidestepping.

There is a distinct physical process of decoherence, which one can express easily in the density operator language (which you resist accepting), which is not the same as collapse and which indeed shows that classical and quantum collapse are indistinguishable. (Classical collapse being the Bayesian updating of probabilities given subsequent observations.)
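To spell that last parenthesis out, a minimal sketch of classical collapse as Bayesian updating (Python, with made-up numbers for the glasses analogy):

[code]
# Classical "collapse" as Bayesian updating (made-up prior, for illustration only):
prior = {"car": 0.7, "coffee table": 0.2, "desk": 0.1}      # where my glasses might be
likelihood = {"car": 0.0, "coffee table": 1.0, "desk": 0.0} # then I see them on the table

norm = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / norm for h in prior}
print(posterior)   # {'car': 0.0, 'coffee table': 1.0, 'desk': 0.0}
[/code]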
 
