arkajad said:
What I mean is that collapse is governed by a stochastic process. Much like radioactive decay but not that simple.
OK, I follow you there, but you are still avoiding the foundational issue. We can speak of the mechanism of decay, and of decay times, because decay is an observable process:
---"look, the atom is still there at "
---"look, 3 minutes and 18 seconds later my gamma detector went 'ping'!"
With decay you can see the event (or extrapolate back from the speed of the decay products) and you can then do statistics: establish a distribution of decay times, measure a half-life, and from that information postulate a mechanism for decay.
With collapse the physical process is that a measurement is made. You can model the measurement process: express the composite of measuring device and system in a larger context and let it evolve until the description is one of an array of recorded outcomes with corresponding probabilities. Note that you MUST use a density operator formulation here, both because of the entanglement between system and measuring device and because the measurement process is thermodynamic in a fundamental way. That is where one sees decoherence. Now you still haven't collapsed the wave-function, or rather the density operator, until you make a specific assertion: that a specific outcome was obtained. Collapse is a conceptual process, not a physical one, and thus the "time it takes" is the time it takes one to think it or write it down.
And it should be apparent in this investigation of the measurement process that what we are writing down is, in the end, a classical probability distribution, and thus what we began with was also a probability distribution (though, by use of a more general method of representation, not a wholly classical one). It is a representation of our knowledge about how the system or meta-system may behave and not a representation of its physical state.
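If a concrete toy helps, here is a minimal sketch of that point (my own illustration; the amplitudes and the two-level pointer are just placeholders): entangle a two-state system with a pointer, trace the pointer out, and what remains is a diagonal density operator, i.e. exactly a classical probability distribution over the recorded outcomes.

```python
import numpy as np

# system starts in a superposition a|0> + b|1>
a, b = 1 / np.sqrt(3), np.sqrt(2 / 3)
psi_sys = np.array([a, b])

# ideal (von Neumann) measurement interaction: |i>|ready>  ->  |i>|pointer_i>
basis = np.eye(2)
psi_total = sum(psi_sys[i] * np.kron(basis[i], basis[i]) for i in range(2))

# the total state is still pure and coherent...
rho_total = np.outer(psi_total, psi_total.conj())

# ...but trace out the pointer: rho_sys[i, j] = sum_k rho_total[(i, k), (j, k)]
rho_sys = np.einsum('ikjk->ij', rho_total.reshape(2, 2, 2, 2))

print(np.round(rho_sys, 3))
# [[0.333 0.   ]
#  [0.    0.667]]   <- diagonal: coherences gone, only P(0) and P(1) remain
```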
As for the time taken by the decoherence implicit in the measurement process, that is arbitrary. We can make the same measurement (and represent it with the same operator) with many specific laboratory configurations, provided each configuration ultimately records the same observable for the system being measured. The decoherence process could be set up to take microseconds or weeks, as we choose, and when and where the decoherence occurs is also relative to how we set up the meta-description of the system + measuring device, e.g. where we put our meta-system / meta-episystem cut.
As far as measuring the system goes, the details of this meta-description are irrelevant. As far as trying to understand measurement and collapse in terms of a model of reality goes, one falls into an infinite regress of measuring devices to observe the measuring devices to observe the measuring devices, et cetera ad infinitum.
It is like trying to speak of absolute position. Coordinates only give the position of a system relative to the observer. You can then try to speak of the observer's position relative to another observer, and you quickly see the futility of it and appreciate the fact that position is always relative. Not meaningless, but, like electrical potential, only meaningful as a difference in values.
arkajad said:
Observable is what can be observed. For instance a dot on a screen. Then the question is: what is the mechanism governing the appearance of such dots in time and space? Because they appear at a certain time and at a certain place. How are the time of appearance and the place of appearance decided? The answer is simple: by a specific stochastic process that is determined by both the wave function and the screen itself.
Don't confuse the dynamics of that dot, e.g. the mechanism for electron evolution, with its collapse. Look at how you use the wave function in describing that dot. Or, more precisely, since presumably you're speaking of a dot on a CRT screen, you have a thermal source (a hot cathode) and so you really need to use a density matrix.
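Concretely, "thermal source, hence density matrix" just means the usual canonical-ensemble form (standard textbook notation, nothing specific to the CRT example):

$$
\rho_{\text{thermal}} \;=\; \frac{e^{-H/k_B T}}{Z} \;=\; \sum_n \frac{e^{-E_n/k_B T}}{Z}\,\lvert n\rangle\langle n\rvert ,
\qquad Z = \operatorname{Tr}\, e^{-H/k_B T},
$$

a genuinely mixed state, not a single ket.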
arkajad said:
You do not see the collapse, you do not see the wave function, but you can see the dot. Dots are your data. Wave functions and collapses are the auxiliary concepts that are needed in order to explain the emergence of these data.
They are auxiliary concepts applying to the prediction of outcomes. They "explain" insofar as they do by expressing maximal prediction. Explanation as you seem to want it to mean would involve breaking the phenomenon down into component phenomena, e.g. fluorescence of the screen, emission of electrons from a hot cathode, propagation through the intermediate e-m field or array of slits and pinholes, etc. But in the end each of these component processes must first and foremost be predictively described so that the reductive explanation of the dot makes sense. Then, to explain further, you must reduce these components. What comes first in this chicken-and-egg chase is key. Classically we stop at a large enough scale that we can refer to an idealization of state we call reality. Quantum mechanics begins with the measurement process as an irreducible phenomenon. As such we begin with prediction and predictive description and not with reality representation.
There is a good reason for this, and it is that we can express the features of a reality representation within the scope of predictive description but not (as we see in QM) the reverse. Those features are specifically those of an underlying deterministic model. But falsify the hypothesis that such a model is possible and we still have our predictive description. The predictive language of QM is more general than the representative language of CM, which is why it can express both quantum and classical phenomena, and it often does so at the same time, as with the decoherence of the system + measuring device.
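To put that "more general" claim in symbols (standard notation, my own aside): a classical probability assignment is exactly the special case of a diagonal density operator, while a general density operator also carries the off-diagonal (coherence) terms that no classical state description reproduces:

$$
\rho_{\text{classical}} \;=\; \sum_i p_i \,\lvert i\rangle\langle i\rvert
\qquad\text{vs.}\qquad
\rho \;=\; \sum_{i,j} \rho_{ij}\,\lvert i\rangle\langle j\rvert ,\quad \rho_{ij}\neq 0 \ \text{for some } i\neq j .
$$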
Within that predictive language the collapse of either psi or rho (on paper) is when you jump from:
"I'm considering a process by which my quantum propagates from a specific source, I use description rho (or psi). It propagates to a measuring device and according to theory the outcomes of my measurements will have the following probabilities P(X=1)=bla, P(X=2) = bla bla,"
to
"I'm now considering the case where we actually observe X=1 so let's update the description so we can calculate future probabilities!"
This is how wave functions and density operators are actually used in practice: to represent classes of actualized quantum systems. All you have in the lab is a sequence of measurements or similar events, i.e. the recorded data. You cannot be like the zoologist and pull out the preserved specimen to show what you found and double-check its features.
The wave-function and density operator are both analogous to the zoologist's category of species, and not analogous to the DNA record of a single specimen.
When teaching my probability and statistics class I had the students guess the probability that at least two of the class (of 45) had birthdays on the same day. Then I showed them the calculation for that probability, and it was much higher than most guessed (about 94%).
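For the record, the calculation is the standard complement of the no-shared-birthday product; a quick sketch (my own, ignoring leap days):

```python
# P(at least two of n people share a birthday)
#   = 1 - prod_{i=0}^{n-1} (365 - i) / 365
from math import prod

n = 45
p_match = 1 - prod((365 - i) / 365 for i in range(n))
print(round(p_match, 3))   # ~0.941, i.e. about 94%
```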
We then went around the room declaring birthdays and, sure enough, we actually had two pairs match up. I then asked the question again: what is the probability that at least two students have the same birthday? One said 90%, then quickly corrected himself: 100%!
I then asked them: was my calculation wrong? We haven't changed who is in the room.
I did this to emphasize to them the nature of logical classes as opposed to sets. Probabilistic statements are statements about classes of possible outcomes, and thus classes of systems, not single instances. By calculating the probability for my room of students I was identifying them as an instance of a particular class, and knowing that class I could express a prediction about the actual instance.
Wave-functions are expressions of how a given instance of a quantum system might behave, given that you know it to be a member of a specific class of systems via the fact that a particular measurement has been made and the value of that measurement is specified.
Thus, for example, a given momentum eigen-"state" and spin "state" for an electron expresses the class of electrons for which the specified values have been measured. In the momentum-spin representation you have a little Dirac delta function centered at the measured momentum, in a tensor product with a specific spin "ket". You can write the "wave-function" in that you expand this Hilbert space vector in terms of components of position eigen-states, and that representation is useful in that it explicitly gives (the square roots of) the probabilities of subsequent position measurements. Theory tells us how P and X relate, and thus that this representation is a sinusoidal curve with a specific wave-length (h/p).
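Spelling that out (standard notation): the position-representation components of the momentum eigenket are

$$
\langle x \mid p_0 \rangle \;=\; \frac{1}{\sqrt{2\pi\hbar}}\, e^{\,i p_0 x/\hbar},
\qquad \lambda \;=\; \frac{2\pi\hbar}{p_0} \;=\; \frac{h}{p_0},
$$

i.e. the sinusoid with de Broglie wavelength h/p, whose squared modulus gives the (uniform) distribution over subsequent position measurements.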
But the electron is not a wave-function. The road is not a line on a map. It is an analogue, and to understand the type of analogue you must look at how the map is used. In the road case the map is a direct analogue, a model of the reality of the road. In the wave-function case we look at what we do with the wave function. We use it to calculate probabilities for position measurements; it is a logical analogue, not a physical one... or rather it is first and foremost a logical analogue.
You may assert it is also a physical one, but you must prove your case for that. I assert that wave-function collapse is a specific indicator that it is not a physical analogue but purely a logical/predictive one, since it is the logic of updating our class of systems that instigates our collapsing the wave-function on paper.
I understand the temptation to say "an electron is both wave and particle", but one is mapping the quantum electron's behavior onto two distinctly classical phenomena, classical waves and classical particles. It is the spectrum of behaviors one is addressing, and we see in this "either or" business the relativity of the actual classical representation. It is the necessary relativity of the "reality" one is trying to paint for the electron. The electron is not the sinusoidal wave nor the Dirac delta-function particle... it is a phenomenon of actualizable measurements which we can probabilistically predict using wave functions (and/or density operators) as representations of interrelated probabilities.