So, should we think of decoherence as being a mathematical abstraction rather than a physical process?
I'm not sure if I just didn't understand your meaning properly, but I don't quite agree with that description. As I've come to understand QM, you shouldn't think of the collapse of the wavefunction as a physical process but as a conceptual process we apply after the physical act of measurement, when we update our information about the system. (Just as you update the value of a Lotto ticket after the drawing, or your suppositions of the likely location of your keys after you see them on the coffee table.)
Likewise superposition is not a physical property of the system but a property of how you are resolving the system in terms of potential observables. A vertically polarized photon "is not in a superposition" of Vert vs Horiz. modes but "is in a superposition" of left circular and right circular polarization modes. It is the modes (of measurement) which superpose not the photon.
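This basis-relativity is easy to check numerically. The following is my own illustrative sketch (not any poster's code), using Jones vectors: the vertical state has a single component in the linear basis but equal weights in the circular basis.

```python
import numpy as np

# Jones vectors in the linear (H, V) basis.
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)

# Circular-polarization modes expressed in the same basis.
L = (H + 1j * V) / np.sqrt(2)   # left circular
R = (H - 1j * V) / np.sqrt(2)   # right circular

# Expand |V> in the circular basis: amplitudes <L|V>, <R|V>.
# (np.vdot conjugates its first argument, giving the inner product.)
aL = np.vdot(L, V)
aR = np.vdot(R, V)

print(round(abs(aL)**2, 12), round(abs(aR)**2, 12))  # 0.5 0.5: equal weights
# In "its own" basis |V> is not superposed at all:
print(round(abs(np.vdot(V, V))**2, 12))              # 1.0
```

The same vector is thus "superposed" or not depending only on which measurement modes we resolve it against, which is exactly the point being made above.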
[QUOTE]So, should we think of decoherence as being a mathematical abstraction rather than a physical process?[/QUOTE]
No more so than we should think of entropy as a mathematical abstraction. Entropy has physical meaning but is not an observable of a system. It is rather a quantitative measure of our knowledge about a given system, in so far as it is a property of a maximally restrictive class of systems to which we can say a given system belongs as an instance.
[QUOTE]...like CSL models.[/QUOTE]
This is the simplest possibility, with an extremely simple stochastic process. I don't think it is general enough to describe all the physical experiments being done in the labs. CSL is simple to explain and simple to apply, but it assumes one homogeneous mechanism for all collapses. That is not what we see looking at particle tracks. The collapses are evidently (at least for me) due to the presence of the detectors, and there is no need (and not much use) for collapsing the wave function in a vacuum.
[QUOTE]I'm not sure if I just didn't understand your meaning properly, but I don't quite agree with that description.[/QUOTE]
There are multiple issues here. Letting the "real" issue sit for the moment: the "state" |+> is not in a superposition w.r.t. the |+> vs |-> basis, but of course is w.r.t. the |0> vs |1> basis. Hence superposition is not "a property of the system" in an absolute sense but rather a relationship between a given ket and our choice of basis.
In my view, the superposition state is in fact the "real" state of the system as long as it's in it. For example, take the state |+> = (|0> + |1>)/√2.
[QUOTE]If you measure in the computational basis you would find for example |1>,[/QUOTE]
You might so find. Prior to adding this additional physical assumption you only know the probabilities, which is to say you don't know. It is when you actualize the assumption that you "collapse" your knowledge of how the system will subsequently behave. In this sense the quantum collapse is no different from the classical collapse in the case of the glasses.
[QUOTE]but this does not mean that the measurement is simply an update of information or that the state was in |1> all the time, like your keys on the table analogy suggests.[/QUOTE]
The collapse component is simply an update of information. Since the subsequent measurement is not compatible with the implied previous measurement (|+> vs |->), you simultaneously lose any dependence on that previous measurement for future predictions.
[QUOTE]In the key case they really were on the table all the time, even before the measurement,[/QUOTE]
Of course, and this is where the "glasses" differ from the quantum system, but it doesn't detract from the fact that my knowledge about where the glasses might be has been changed by my observing where they are.
[QUOTE]but in the |+> case this is not true, because experiments done to the state before the collapse would yield quite different results between |+> and |1>,[/QUOTE]
You can't have your cake and eat it too. Either you did measure |1> or you didn't. You can't go back in time and undo this. So you are talking about cases and not a given system. Once you change the assumption that you did measure |1> vs |0> and that you observed the value |1>, you are "uncollapsing" the wave-function... and so you have the prior prediction...
[QUOTE]in particular measurements in the |+>,|-> basis would find the state |+> 100% of the time.[/QUOTE]
Consider it this way. Suppose you did make the |1> measurement, but did so to a given system after I had measured it (but I haven't yet told you what observable I measured nor what value I got).
[QUOTE]Quantum Mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?[/QUOTE]
Quantum mechanics, when it was being conceived, was unsure about the collapse. Schrodinger himself was unsure. Then there came applications, and QM concentrated on applications that do not need collapse. The mechanism of forming tracks in cloud chambers was never explained by QM. http://en.wikipedia.org/wiki/Mott_problem discussed probabilities of different tracks but did not say anything about the mechanism itself or about the timing of the events. So physicists decided that one is not supposed to ask about "mechanisms". Why? Because no one (except for Schrodinger, but who cares?) asks such questions.
[QUOTE]Consider it this way. Suppose you did make the |1> measurement, but did so to a given system after I had measured it (but I haven't yet told you what observable I measured nor what value I got).[/QUOTE]
When I think of an example where you measure on the state without telling me, I get the opposite conclusion, explained by the following:
You would still write the |1> wave-function, even to describe the system prior to your measurement. If I then told you I measured a specific observable, you would use that |1> wave-function to predict the probability of the value I measured; and finally, if I said I measured |+>, you would collapse the wave-function to |+> prior to my measurement to see what "Alice" measured before me.
By reversing the sequence of assumptions made, I have totally changed where you write the |+> description and where you write the |1> description. Can you still then say these are states of reality? Or are they not truly representations of our knowledge about the system in question?
[QUOTE]Quantum Mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?[/QUOTE]
Nonlinear quantum mechanics states that collapses do occur.
[QUOTE]Quantum Mechanics doesn't state a collapse will occur - and if the theory holds then a collapse never occurs - correct? When we say the wavefunction has collapsed, it really hasn't?[/QUOTE]
As you see, there are "interpretational differences". If you hold that "collapse" is a conceptual process, then it is meaningless to say "it occurs"; rather, one says the theorist "collapses" his description upon new information (my position).
[QUOTE]When I think of an example where you measure on the state without telling me, I get the opposite conclusion, explained by the following:

Consider that I start with the state |+>. If I measure in the |+>,|-> basis I would now find the state |+> with 100% probability. Let's now consider what happens if you did a measurement in the |0>,|1> basis without telling me. You would "collapse" the state to one of them; let's just say it happened to be |1>.

Now, without you telling me anything, i.e. my knowledge about the system does not change, I now have a non-zero probability of measuring |-> (50%) if I again measure in my basis. The probability of measuring |-> has thus changed without my knowledge being changed at all.

I can only interpret this as the fact that the physical state has actually changed, which is completely different from any classical analogy, where no amount of information update can ever change the location of either keys or glasses.[/QUOTE]
Yes, this is quite correct. Measurement is a physical act and it will sometimes change the physical system. If you think of it as being in a state then you must say the state has changed (provided it was not already in an eigenstate).
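The numbers in this exchange can be checked directly. Here is a small sketch of the Born-rule arithmetic behind the example (my own illustration, not any poster's code): prepare |+>, optionally let an intermediate |0>/|1> measurement collapse it to |1>, then compute the probability of finding |-> in a later |+>/|-> measurement.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def prob(outcome, state):
    """Born rule: probability that `state` is found as `outcome`."""
    return abs(np.vdot(outcome, state)) ** 2

# No intermediate measurement: |+> never yields |->.
print(round(prob(minus, plus), 12))   # 0.0

# An intermediate |0>/|1> measurement happened to give |1>.
collapsed = ket1
print(round(prob(minus, collapsed), 12))  # 0.5
```

The 0% → 50% jump for |-> is exactly the change discussed above; the debate is only over whether to call it a change of the physical state or a change of the description.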
[QUOTE]Even if "collapse has been realized" we will still, until integrating that assumption, describe the system via the equivalent of a density operator. In this setting "collapse" is represented by decoherence. There is a change in the entropy of the representation. This implies a non-unitary (though still linear?) evolution of the system itself during the measurement process.[/QUOTE]
If the collapse were described mathematically in this way, then we would certainly have a problem. But it can be described in a different way. Collapse happens objectively - as it leaves an objective "track" - the wave function changes in a mildly nonlinear way, then it continues its non-unitary evolution until the next collapse, etc. The nonunitarity is negligible far from the detectors; the evolution is standard and unitary in an empty space without detectors.
[QUOTE]Linear and nonlinear quantum mechanics? Which one is correct?[/QUOTE]
We don't know yet....
[QUOTE]If the collapse were described mathematically in this way, then we would certainly have a problem. But it can be described in a different way. Collapse happens objectively - as it leaves an objective "track" - the wave function changes in a mildly nonlinear way, then it continues its non-unitary evolution until the next collapse, etc. The nonunitarity is negligible far from the detectors; the evolution is standard and unitary in an empty space without detectors.[/QUOTE]
You're speaking of a particle track in a cloud chamber. We can describe that track as a sequence of position measurements, and indeed speak of the idealized limit of continuous measurement. But the reality is that the track is a discrete sequence of position measurements. This by itself settles nothing in the discussion. Yes, we can measure the position of a quantum. Yes, we can measure it twice, three times, 10^14 times.
[QUOTE]This completely describes the evolution of a single quantum system under continuous monitoring.[/QUOTE]
But of course. The description is that of a sequence of measured values (of position). That is all we ever see: measurements. This is why I harp on the fact that assertions of "what goes on" between measurements are meaningless. Rather, we can predict outcomes of measurement and evolve our prediction based on known dynamics. The dynamically evolving wave-function (or equivalent density operator) is a mathematical representation of that array of predictions.
[QUOTE]Yet, if we are not interested in a single quantum system, but care only about averages over an infinite ensemble of similarly prepared systems, only then, if we wish, we do the averaging and get the perfect linear Liouville master equation for the density matrix.[/QUOTE]
So you declare. But why ignore the equivalence of representation, even for single quantum systems? Why are you so opposed to using the mathematical tools best equipped to express both the quality and degree of knowledge we have about how a single system will behave in subsequent measurements?
[QUOTE]In short: single systems are described by collapsing wave functions; ensembles are described by a non-collapsing, continuous-in-time, linear master equation for the density matrix. That's all.[/QUOTE]
In short: single systems are prepared in such a way that we know they are members of a class of systems which we represent by a wave-function. Under measurement, given that an act of measurement is a physical interaction, we update the class of systems to which we assign the single system being described. Sometimes, with less than maximal information, the most accurate available class description is not a wave-function but a density operator. That is all.
[QUOTE]If you want to describe a quantum system with a position observable, observed every 10^-5 seconds or so.[/QUOTE]
I am describing the stochastic process that reproduces what we see, including the timing of the events. You can compare my simulation with experiment. And how do you compare two results of an experiment? You have two photographs of an interference pattern with 10000 electrons each time, one done on Monday and one on Tuesday. Of course the dots are in different places. And yet you notice that both describe the same phenomenon. How? Because you neglect the exact places and compare statistical distributions computed using statistical procedures applied to your photographs, each with 10000 dots.
In a cloud chamber it is not you who decides how often the records are being made. It is decided by the coupling. The timing is random; it is part of the random process.
[QUOTE]You update your description by inputting, say, the first or the 108th position measurement value, and you get a different description because you input more information. The description has "collapsed". Input more actualized values and you collapse it more.[/QUOTE]
I am not imputing anything. All is done through the coupling. What I do is: at the end I may have a look at the track.
[QUOTE]Eventually you have something which looks very close to a classical particle trajectory, but it is still an expression of where you saw bubbles, i.e. records of measurements. You still express the measurement within the linear algebra over the Hilbert space. There is no need for, nor empirical evidence supporting, the introduction of non-linearity in the dynamics at the level of the operator algebra.[/QUOTE]
Try to accomplish the above with a linear process and show me the result.
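For what it is worth, the kind of model being argued about here can be sketched in a few lines. This is my own toy illustration (not the poster's code, and the numbers are made up): a quantum-jump unraveling of a driven, monitored qubit, in which the "collapse"/click times are generated by the coupling itself rather than scheduled by the experimenter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy quantum-jump sketch: a resonantly driven qubit monitored by a
# photodetector with decay rate gamma. Click times come out of the
# stochastic dynamics, not out of the observer's choices.
omega, gamma, dt, steps = 2.0, 1.0, 0.001, 50_000
ket0 = np.array([1, 0], dtype=complex)
psi = ket0.copy()
clicks = []

for n in range(steps):
    p_click = gamma * abs(psi[1]) ** 2 * dt   # detection prob. in [t, t+dt)
    if rng.random() < p_click:
        clicks.append(n * dt)                 # record the event...
        psi = ket0.copy()                     # ...and collapse to the ground state
    else:
        # no-click evolution under H = omega*sigma_x - (i*gamma/2)|1><1|
        dpsi = -1j * dt * np.array(
            [omega * psi[1], omega * psi[0] - 0.5j * gamma * psi[1]]
        )
        psi = psi + dpsi
        psi = psi / np.linalg.norm(psi)

print(len(clicks))  # a random number of randomly timed events
```

The sketch only shows that event times can emerge from the process; whether the update step deserves the name "collapse" or "information update" is exactly what is in dispute in this thread.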
[QUOTE]Now getting back to quantum theory. How can you define a probability for a single quantum?[/QUOTE]
[QUOTE]If you can bring yourself to acknowledge that it is possible, and useful, to sometimes... upon occasion... speak of a class of quantum systems with the same set of values for a given complete observable, and hence the same wave-function, then can you explain to me, other than for personal spiritual reasons, how you can say this is ever not the case?[/QUOTE]
I prefer a down-to-earth approach - comparing simulations based on a theory with real data coming from real experiments. I have nothing against classes. But for me the success of any theory is in being able to simulate processes that we observe in our labs.
[QUOTE]I am describing the stochastic process that reproduces what we see, including the timing of the events. You can compare my simulation with experiment. And how do you compare two results of an experiment? You have two photographs of an interference pattern with 10000 electrons each time, one done on Monday and one on Tuesday. Of course the dots are in different places. And yet you notice that both describe the same phenomenon. How? Because you neglect the exact places and compare statistical distributions computed using statistical procedures applied to your photographs, each with 10000 dots.[/QUOTE]
Your simulation matches experiments only in the aggregate (same relative frequencies, same lines of cloud chamber bubbles, but not identical individual outcomes); thus your inference is again about classes of individual quanta. I'm sure you're doing good work, but my objections are to how you use the term "collapse". If you are simulating entanglement then you are positively not simulating the physical states of the quantum systems, since you would necessarily satisfy Bell's inequality and/or fail to get the proper correlations. You would need to be simulating the (probability) distributions of outcomes directly, which would involve nothing more than doing the QM calculations.
Is there a probability involved? Somehow there is, but it is hidden.
The same when you compare two tracks in an external field. They are not the same, and yet they have similar "features" - for instance the average distance between dots, approximately the same curvature when you average, etc. Is probability involved? Somehow it is, but it is hidden in the application of statistics to finite samples.
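The Monday/Tuesday comparison can be made concrete with a toy sketch (my own illustration; the density and sample size are invented): two independent runs of 10000 "dots" drawn from the same interference-like distribution disagree dot by dot but agree in their binned statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_dots(n, rng):
    """Rejection-sample n points in [-1, 1] from p(x) ~ cos^2(6x) * exp(-x^2)."""
    out = []
    while len(out) < n:
        x = rng.uniform(-1, 1, size=4 * n)
        keep = rng.uniform(0, 1, size=4 * n) < np.cos(6 * x) ** 2 * np.exp(-x**2)
        out.extend(x[keep].tolist())
    return np.array(out[:n])

monday = draw_dots(10_000, rng)   # "Monday's photograph"
tuesday = draw_dots(10_000, rng)  # "Tuesday's photograph"

bins = np.linspace(-1, 1, 41)
h_mon, _ = np.histogram(monday, bins=bins, density=True)
h_tue, _ = np.histogram(tuesday, bins=bins, density=True)

print(np.array_equal(monday, tuesday))               # False: different dots
print(bool(np.max(np.abs(h_mon - h_tue)) < 0.35))    # True: same distribution
```

This is exactly the "hidden" probability described above: it appears only once statistics are applied to the finite samples, never in any individual dot.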
[QUOTE]I prefer a down-to-earth approach - comparing simulations based on a theory with real data coming from real experiments. I have nothing against classes. But for me the success of any theory is in being able to simulate processes that we observe in our labs.[/QUOTE]
The issue is what the theory says - the semantics of the language you use. Words mean things. I can simulate a given probability distribution, but that won't mean the internals of my simulation correspond to a physical process which upon repetition matches that distribution. My point is that the theory matches what goes on in the lab only in so far as it makes probabilistic predictions - quite accurate ones, but only for aggregates of (and hence classes of) experiments.
[QUOTE]I am stressing the importance of timing - which is usually dynamical and not by "instantaneous measurement at a chosen time" from the textbooks. Textbooks do not know how to deal with the dynamical timing - which is standard in the labs.[/QUOTE]
The fact that you think the measurement is an instantaneous process as represented in the textbooks is where I see you misinterpreting. The mathematics is instantaneous because it represents something one level of abstraction above the physical process, namely the logic of the inferences we make about predictions. (There is no "timing" in mathematics; 2+2=4 eternally.) The "collapse problem" is not with the theory but with the mind misunderstanding what a specific component of the theory is referring to.
[QUOTE]Your simulation matches experiments only in the aggregate (same relative frequencies, same lines of cloud chamber bubbles, but not identical individual outcomes); thus your inference is again about classes of individual quanta. I'm sure you're doing good work, but my objections are to how you use the term "collapse". If you are simulating entanglement then you are positively not simulating the physical states of the quantum systems, since you would necessarily satisfy Bell's inequality and/or fail to get the proper correlations. You would need to be simulating the (probability) distributions of outcomes directly, which would involve nothing more than doing the QM calculations.[/QUOTE]
I am sure I am getting all the correlations that are seen in experiments. I do not care about Bell inequalities, which do not even address the continuous monitoring of single quantum systems.
[QUOTE]The issue is what the theory says - the semantics of the language you use. Words mean things. I can simulate a given probability distribution, but that won't mean the internals of my simulation correspond to a physical process which upon repetition matches that distribution. My point is that the theory matches what goes on in the lab only in so far as it makes probabilistic predictions - quite accurate ones, but only for aggregates of (and hence classes of) experiments.[/QUOTE]
I am not talking about simulating probability distributions. I am talking about stochastic processes and their trajectories in time.
[QUOTE]The fact that you think the measurement is an instantaneous process as represented in the textbooks is where I see you misinterpreting. The mathematics is instantaneous because it represents something one level of abstraction above the physical process, namely the logic of the inferences we make about predictions. (There is no "timing" in mathematics; 2+2=4 eternally.) The "collapse problem" is not with the theory but with the mind misunderstanding what a specific component of the theory is referring to.[/QUOTE]
The collapse is a part of a stochastic process. Sometimes we have one collapse; the time of the collapse is always a random variable. That is what the standard approach to QM does not take into account - for historical reasons and because of the inertia of human thought.
[QUOTE]The representation of measurement goes beyond "instantaneous", as I pointed out in the (logically) reversed representation of an experiment. I'll repeat in more detail:

Consider a single experimental setup. A quantum is produced from a random source, a sequence of measurements are made, A then B then C (which take time and room on the lab's optical bench or whatever), and then a final system detector registers the system to assure a valid experiment. If you like you can consider intermediate dynamics as well, but for now let's keep it simple.

What does theory tell us about the sequence of measurements?[/QUOTE]
It tells us absolutely nothing about the timing. You are consistently neglecting this issue.
[QUOTE]Your humble stochastic simulations are fine research --I am sure-- but please refer to the physical processes by their rightful name, "interaction", not "collapse".[/QUOTE]
In fact, I do not like the term "interaction", because interaction is usually understood as a "Hamiltonian interaction". I prefer the term "non-Hamiltonian coupling".
[QUOTE]I am sure I am getting all the correlations that are seen in experiments. I do not care about Bell inequalities, which do not even address the continuous monitoring of single quantum systems.[/QUOTE]
Bell's inequalities (and their violation) are about correlations; if you don't care then you don't care.
[QUOTE]I am not talking about simulating probability distributions. I am talking about stochastic processes and their trajectories in time.[/QUOTE]
I know you are not talking about it, but that is what you are doing. You are saying your computer-model stochastic process matches the probability distributions for physical systems. There, and only there, can you compare with experiment. You speak of "collapse" but there's no reason to believe the "collapses" in your stochastic model match anything "out there in actuality". It is the old classic phenomenologist's barrier: "We can only know what we experience." Yes, it is too restrictive for science in general. At the classical scale we can infer beyond the pure experience, but QM specifically pushes us to the level where that barrier is relevant, and we must be more the positivist or devolve into arguments over "how many angels can dance on the head of a pin".
[QUOTE]The collapse is a part of a stochastic process. Sometimes we have one collapse; the time of the collapse is always a random variable. That is what the standard approach to QM does not take into account - for historical reasons and because of the inertia of human thought.[/QUOTE]
Yes, the collapse is a part of a stochastic process, but that process is a conceptual process (your model or mine), not a physical process (actual electrons). Again you speak of "the time of the collapse" as if you can observe physical collapse, and again I ask "HOW?" Until then, the "why QM does not take this into account" question lacks foundation.
[QUOTE]It tells us absolutely nothing about the timing. You are consistently neglecting this issue.[/QUOTE]
And I'm explaining why it not only can be neglected but should be. The timing of "collapse" is not a physically meaningful phrase. I can collapse the wave-function (on paper) at any time I choose after the measurement is made. If you'd like to discuss the physical process of measurement itself then let's, but in a different thread, as that is quite a topic in itself.
[QUOTE]In fact, I do not like the term "interaction", because interaction is usually understood as a "Hamiltonian interaction". I prefer the term "non-Hamiltonian coupling".[/QUOTE]
"Coupling" is "interaction". Hamiltonians are how we represent the evolution of the whole composite of the two systems being coupled. When you focus on part of that whole you lose the Hamiltonian format, but it is still an interaction. You can still work nicely with this focused case using... pardon my bringing this up again... density matrices and a higher-order algebra. The density operators can still be "evolved" linearly, but no longer with an adjoint action of a Hamiltonian within the original operator algebra. You then see decoherence occur (the entropy of the DO increases over time, representing this random stochastic process you're modeling). I think you'd find it of value to determine exactly how your computer models of stochastic processes differ from, or are equivalent to, this sort of representation.
[QUOTE]Bell's inequalities (and their violation) are about correlations; if you don't care then you don't care.[/QUOTE]
It matches more than that. It also matches the fact that in the real world probabilities are calculated by counting and averaging characteristics of single events, not by calculating integrals. Those who neglect that fact are deliberately blind to a part of reality. They say: "we need just tools for calculating numbers". Well, that's their choice.
[QUOTE]You speak of "collapse" but there's no reason to believe the "collapses" in your stochastic model match anything "out there in actuality".[/QUOTE]
There are no reasons to believe anything. Each belief is just a personal choice. Like choosing "we only need to know how to calculate numbers and nothing more".
[QUOTE]It is the old classic phenomenologist's barrier: "We can only know what we experience." Yes, it is too restrictive for science in general. At the classical scale we can infer beyond the pure experience, but QM specifically pushes us to the level where that barrier is relevant, and we must be more the positivist or devolve into arguments over "how many angels can dance on the head of a pin".[/QUOTE]
QM "pushes" some physicists and some philosophers into what you call "positivism", but some are more resistant than others. Even so, the "event"-based model can calculate more than the positivistic "don't ask questions, just calculate" model. So, also with a positivistic attitude, you are behind.
[QUOTE]Yes, the collapse is a part of a stochastic process, but that process is a conceptual process (your model or mine), not a physical process (actual electrons).[/QUOTE]
Well, Hilbert spaces, wave functions, operators, spacetime metrics are also conceptual. So what?
[QUOTE]Again you speak of "the time of the collapse" as if you can observe physical collapse, and again I ask "HOW?" Until then, the "why QM does not take this into account" question lacks foundation.[/QUOTE]
They always come in pairs: (collapse, event). We observe events. Collapses are in the Platonic part of the world. Nevertheless, if you want to simulate events you need the collapses. Like, in order to calculate orbits of planets you need to solve differential equations. Differential equations are in the Platonic world as well.
[QUOTE]I think you misuse the term "collapse" where you should be speaking of "decoherence", which is the physical process of external random physical variables, i.e. "noise", being introduced into the physical system.[/QUOTE]
"Random variables"? "External"? "Noise"? Are these better or sharper terms? I strongly doubt it.
[QUOTE]And I'm explaining why it not only can be neglected but should be. The timing of "collapse" is not a physically meaningful phrase.[/QUOTE]
It is not a physical phrase. "Timing of the event" is such. But they always come in pairs.
[QUOTE]I can collapse the wave-function (on paper) at any time I choose after the measurement is made. If you'd like to discuss the physical process of measurement itself then let's, but in a different thread, as that is quite a topic in itself.[/QUOTE]
Right. You can collapse a wave-function on paper, and you can erase a differential equation on paper. This will not destroy the planet's orbit.
[QUOTE]"Coupling" is "interaction". Hamiltonians are how we represent the evolution of the whole composite of the two systems being coupled. When you focus on part of that whole you lose the Hamiltonian format, but it is still an interaction. You can still work nicely with this focused case using... pardon my bringing this up again... density matrices and a higher-order algebra. The density operators can still be "evolved" linearly, but no longer with an adjoint action of a Hamiltonian within the original operator algebra. You then see decoherence occur (the entropy of the DO increases over time, representing this random stochastic process you're modeling). I think you'd find it of value to determine exactly how your computer models of stochastic processes differ from, or are equivalent to, this sort of representation.[/QUOTE]
You can play with density matrices, but they will not let you understand and simulate the observed behavior of a unique physical system. You may deliberately abandon that, you may decide "I don't need it, I don't care", but even in this case I am pretty sure that is a forced choice. You choose it because you do not know anything better than that. You even convince yourself that there can't be anything better. But what if there can be?
[QUOTE]I think your prejudice against DO's (describing a single system) is what is keeping you from understanding this fully. The dynamics of the coupling of system to episystem can be expressed via a Hamiltonian on the composite system + episystem, and then tracing over the "epi" part yields a non-sharp and "decohering" system description... but again only expressible as a density operator.[/QUOTE]
It is not so much my prejudice. It's my conscious choice.
[QUOTE]Again I submit that when you speak of a "wave function valued random variable" (which it seems to me you are using) you are effectively describing a density operator.[/QUOTE]
Well, it is like saying: when you speak of a function, you effectively speak about its integral. In a sense you are right, but knowing a function you can do more than just compute one of its characteristics.
[QUOTE]Consider a random distribution of Hilbert space vectors with corresponding probabilities:
[tex] \{(\psi_1,p_1),(\psi_2,p_2),\cdots\}[/tex]
It is equivalently realized as a density operator:
[tex]\rho = \sum_k p_k \rho_k,[/tex]
where
[tex]\rho_k = \psi_k\otimes\psi^\dagger_k.[/tex][/QUOTE]
This is one way. Now, try to go uniquely from your density matrix to the particular realization of the stochastic process. You know it can't be done. Therefore there is more potential information in the process than in the Markov semi-group equation.
[QUOTE]That IS what the density operator represents pragmatically and within the modern literature. Yes, when we speak of a (random) ensemble of systems we must use density operators, but that isn't the D.O.'s definition.[/QUOTE]
No, I don't have to. Like having a function, I don't have to calculate its integral. I can be more interested in its derivative, for example. Or I can modify its values on some interval.
[QUOTE]A probability can be associated with a single system in that it expresses our knowledge about that system in the format of: to what class of systems that one belongs. In expressing this we understand that the definition of the value of a probability comes from the class, not from the singular system. A D.O. is a probability distribution over a set of Hilbert space vectors, e.g. wave-functions.[/QUOTE]
Well, you are speaking about "our knowledge" while I am speaking about our attempts to understand the mechanism of formation of events. A mechanism that can lead us to another, perhaps even better mechanism, without random numbers at the start.
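The non-uniqueness being argued over here is easy to exhibit concretely. A small sketch (my own illustration): two visibly different pure-state ensembles produce one and the same density operator, so the density operator alone cannot identify which stochastic realization generated it.

```python
import numpy as np

# Column vectors for the computational and diagonal bases.
ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def rho_of(ensemble):
    """rho = sum_k p_k |psi_k><psi_k| for an ensemble [(p_k, psi_k), ...]."""
    return sum(p * psi @ psi.conj().T for p, psi in ensemble)

rho_a = rho_of([(0.5, ket0), (0.5, ket1)])   # 50/50 mixture of |0>, |1>
rho_b = rho_of([(0.5, plus), (0.5, minus)])  # 50/50 mixture of |+>, |->

print(np.allclose(rho_a, rho_b))  # True: both equal I/2
```

This is the content of both positions at once: every measurement statistic depends only on rho (so nothing is lost predictively), yet the map from ensembles or trajectories to rho is many-to-one (so rho cannot single out a particular realization).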
[QUOTE]There are no reasons to believe anything. Each belief is just a personal choice. Like choosing "we only need to know how to calculate numbers and nothing more".[/QUOTE]
Then you see no distinction between belief in voodoo and belief in atoms. There is so much wrong with this statement I don't know where to begin....
[QUOTE]QM "pushes" some physicists and some philosophers into what you call "positivism", but some are more resistant than others. Even so, the "event"-based model can calculate more than the positivistic "don't ask questions, just calculate" model. So, also with a positivistic attitude, you are behind.[/QUOTE]
Resistant or not, what you can calculate doesn't validate the identification of your calculus with "reality", especially when there exist multiple methods of calculation. Reality is not the mathematics; it is the empirical assumptions which cannot be ignored. I can ignore your stochastic processes without any loss in the fidelity of the predictions of QM.
[QUOTE]Well, Hilbert spaces, wave functions, operators, spacetime metrics are also conceptual. So what?[/QUOTE]
So they are not "the reality" but our tools for calculating what does or may happen... and we err in forgetting this fact (e.g. when we wonder about collapse, and the timing thereof, as if it were happening other than on paper or in the mind of the holder of the concept).
> They always come in pairs: (collapse, event). We observe events. Collapses are in the Platonic part of the world. Nevertheless, if you want to simulate events, you need the collapses. Like in order to calculate the orbits of planets you need to solve differential equations. Differential equations are in the Platonic world as well.

OMG, you are a Platonist? No wonder...
> "Random variables"? "External"? "Noise"? Are these better or sharper terms? I strongly doubt it.

I placed some of these terms in quotes because they were common-usage synonyms for the sharper ones. But YES, "random variable" has a specific, sharp meaning: a symbol representing the outcomes of a class of empirical events, specifically outcomes to which we can assign probabilities. And "external" has a perfectly well-defined operational meaning: we can isolate a system from external effects without changing the system itself (as a class, i.e. defined by its spectrum of observables and degrees of freedom).
> Right. You can collapse a wave-function on paper and you can erase a differential equation on paper. This will not destroy the planet's orbit.

Very good. That's progress. So then you agree there is a "collapse on paper", but you seem to be saying there is also a "collapse in reality" which the paper process is representing. Correct?
> You can play with density matrices, but they will not let you understand and simulate the observed behavior of a unique physical system. You may deliberately abandon that, you may decide "I don't need it, I don't care", but even in this case I am pretty sure that is a forced choice. You choose it because you do not know anything better than that. You even convince yourself that there can't be anything better. But what if there can be?

"Anything better" is a value judgment. Let us establish the value system within which we work as physicists. I say "there can't be anything better" specifically in the context of predicting the physical outcomes of experiments and observables. By what value system do you claim something is "better"?
> It is not so much my prejudice. It's my conscious choice.

A prejudice may or may not be a conscious choice. The point is that it is an a priori judgment. Revisit it, and ask instead what the justification for that judgment is. I know a man who consciously ignores the evidence for evolution because it might undermine his faith in the literal "truth" of the Bible. Are you doing the same w.r.t. density operators?
> Well, it is like saying: when you speak of a function, you effectively speak about its integral. In a sense you are right, but knowing a function you can do more than just compute one of its characteristics.

Yes, you have more components to play with (just as with epicycles you have more variables to tweak). The important point is that with the D.O.'s you have less, yet no loss of predictive information. Thus the "more" you refer to is not linked, or linkable, to any empirical phenomena. Does it then still have physical meaning, in your considered opinion?
> This is one way. Now, try to go uniquely from your density matrix to the particular realization of the stochastic process. You know it can't be done. Therefore there is more potential information in the process than in the Markov semigroup equation.

Again, see my point above... what utility does this procedure have if it does not change what one can empirically predict? (I do not deny it might have some utility, but I call your attention to the nature of that utility if it does manifest.)
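The many-to-one point both posters are circling can be checked numerically: physically distinct ensembles (distinct "realizations") can yield the identical density matrix, so the map from ensemble to density matrix cannot be inverted. A minimal sketch, with two standard illustrative ensembles of a single qubit:

```python
import numpy as np

def density_matrix(ensemble):
    """Build rho = sum_i p_i |psi_i><psi_i| from (probability, ket) pairs."""
    return sum(p * np.outer(psi, psi.conj()) for p, psi in ensemble)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ket_plus = (ket0 + ket1) / np.sqrt(2)
ket_minus = (ket0 - ket1) / np.sqrt(2)

# Ensemble A: |0> or |1>, each with probability 1/2.
rho_A = density_matrix([(0.5, ket0), (0.5, ket1)])
# Ensemble B: |+> or |->, each with probability 1/2.
rho_B = density_matrix([(0.5, ket_plus), (0.5, ket_minus)])

# Distinct preparations, identical density matrix (the maximally mixed state I/2):
assert np.allclose(rho_A, rho_B)
assert np.allclose(rho_A, np.eye(2) / 2)
```

Since every measurement statistic is a function of rho alone, no experiment distinguishes the two preparations. That is simultaneously the sense in which a particular realization carries "more information" than rho, and the sense in which that extra information has no empirical consequence.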
> No, I don't have to. Like having a function, I don't have to calculate its integral. I can be more interested in its derivative, for example. Or I can modify its values on some interval.

Yes, you can do what you like as a person, but are you then doing physics or astrology? To express the maximal known information about a system in terms of usage common to the physics community, you really, really should use density operators as they are understood in that community.
> Well, you are speaking about "our knowledge", while I am speaking about our attempts to understand the mechanism of the formation of events. A mechanism that can lead us to another, perhaps even better, mechanism, without random numbers at the start.

Then you are on a speculative quest. That is fine and good. But acknowledge that you speculate, instead of declaring the orthodox to be "wrong". When you find that mechanism, and can justify the superiority of believing in its reality, then come back.
> So they are not "the reality" but our tools for calculating what does or may happen... and we err in forgetting this fact (e.g. when we wonder about collapse, and the timing thereof, as if it were happening other than on paper or in the mind of the holder of the concept).

You are missing the point. Everybody is calculating a lot of things. And you too. There is nothing wrong with calculations. There is nothing wrong with solving differential equations; they are on paper or in the mind.
> You are missing the point. Everybody is calculating a lot of things. And you too. There is nothing wrong with calculations. There is nothing wrong with solving differential equations; they are on paper or in the mind.

What you say here is correct w.r.t. calculations yielding observable predictions. But the validity of a calculation does not imply the reality of the mathematical objects or processes used.
The point is whether at the end of your calculation you get something that you can compare with observations. In this respect there is no difference between solving differential equations and models with collapses. In each case, at the end you get numbers or graphs that you can compare with experimental data.
So, your war is misdirected.