Very interesting (long !) post. I will reread it several times. But some quick comments.
seratend said:
This is where the problems arise:
In the case of continuous [random variable] probabilities, the probability of a point (the event is a set containing a single point) is zero (e.g. under the Lebesgue measure on |R).
Honestly, I don't think that's a problem. Think of the visible universe as divided up into cubes a 100th of the Planck length on a side. Then there are only a finite number of physically distinguishable position states possible (see the sketch below).
I had the impression that modern theories (superstrings, loop quantum gravity) only take into account a finite number of degrees of freedom anyway.
I don't think it is an issue.
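To make that finiteness concrete, here's a back-of-the-envelope count in Python (just a sketch; the lattice spacing of a 100th of the Planck length and the diameter of the visible universe are the assumed inputs):

```python
# Rough count of distinguishable position cells in the visible universe,
# assuming a cubic lattice with spacing 1/100 of the Planck length.
PLANCK_LENGTH = 1.6e-35          # metres
CELL = PLANCK_LENGTH / 100       # assumed lattice spacing
UNIVERSE_DIAMETER = 8.8e26       # metres, roughly the visible universe

cells_per_side = UNIVERSE_DIAMETER / CELL
n_cells = cells_per_side ** 3
print(f"{n_cells:.1e}")          # ~1.7e+191: enormous, but finite
```

Enormous, but finite, so the continuum's "a point has probability zero" issue never literally arises.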
To get the equivalent non-commuting observables of QM in classical probability, I must construct, for example (there are many other possibilities), new random variables that depend *explicitly* on the probability law of a given random variable.
Yes, but you will have to do non-local things...
In addition, the statistical/deterministic boundary becomes even fuzzier (and may be nonsense) when we consider the result of the weak law of large numbers applied to a classical system of independent, identically distributed random variables. We get a global random-variable system with a probability of 100% of being at the mean value of the individual random variable (this also works for independent identical QM observables). In a world of statistics, you recover what seems to be a deterministic world (from a set of statistical states, we recover what seems to be a deterministic state).
One has to be careful with this hammer of the weak law of large numbers. IF you accept that you are dealing with probabilities, then, yes, the weak law of large numbers makes sense. However, it is by no means a way to deduce probabilities if you didn't define a probability law in the first place!
It is exactly the fallacy Everett made when he tried to prove that world states with non-Born statistics had Hilbert norms which became infinitesimally small "at the end of times". First of all, there was the "at the end of times" (an infinite number of measurements), and next, there was a priori no reason to call the Hilbert norms "probabilities": that's exactly what he tried to prove !
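Just to show where the law legitimately applies, a minimal Monte Carlo sketch (the value of p_plus is an arbitrary assumption): GIVEN a probability law, sample means concentrate around the mean; the convergence is a consequence of the law, never a way of establishing that something is a probability.

```python
import numpy as np

# Weak law of large numbers, GIVEN an assumed probability law p_plus:
# the sample mean concentrates around p_plus as n grows.
rng = np.random.default_rng(0)
p_plus = 0.25                    # assumed probability of outcome 1
for n in (10**2, 10**4, 10**6):
    samples = rng.choice([1.0, 0.0], size=n, p=[p_plus, 1 - p_plus])
    print(n, samples.mean())     # approaches p_plus
# None of this tells you WHY p_plus deserved to be called a probability:
# that was an input, and it is exactly Everett's missing step.
```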
Examples: we can connect, formally, the deterministic description of the classical Maxwell equations to independent random variables (e.g. using the electron model above for the charge distribution).
We can do even better: we can take the QM free Hamiltonian of the em field (photons). In addition, if we assume that we have an infinite countable number of photons, all in the individual eigenstate |e> (the state of the whole system being |E> = |e>|e>…|e>…), we have with 100% confidence a static energy density of this field, e = sum_n e/N (where N is the total number of photons of this field, N → +∞).
If |e> is an energy eigenstate, then it is even true for a single state :-)
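To spell that out with a toy Hamiltonian (just a sketch; the matrix is an arbitrary example): for an energy eigenstate, the Born-rule variance of the energy is exactly zero, so even a single system gives e with certainty.

```python
import numpy as np

# Arbitrary toy Hamiltonian and one of its eigenstates.
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])
eigvals, eigvecs = np.linalg.eigh(H)
e, psi = eigvals[0], eigvecs[:, 0]

mean = psi @ H @ psi             # <psi|H|psi> equals the eigenvalue e
var = psi @ H @ H @ psi - mean**2
print(mean, e, var)              # mean == e, variance == 0 (up to rounding)
```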
How can you interpret your examples, if you accept the occurrence of such possibilities (100% confidence)?
I don't accept that (for the same reason I don't accept Everett's reasoning). First of all, you don't have an INFINITY of systems, but always a huge but finite number (watch out for black holes !). Second, once you have these finite systems, the states which do NOT have the right statistics (deviating in a very significant way) will exist, even if they have very small Hilbert norms.
When you take the global system (the microscopic + macroscopic measurement system), you seem to get a deterministic value; however, if you are able to look at finite parts of this system, you can only get random events. Things become even more complicated if you admit that you may see a deterministic event because of the uncertainty of the measurement (since we can only "see" the approximate random variable/observable, which may always give the same value).
As I said, there exist states (if you allow for unitary evolution) which have a small but finite Hilbert norm, and which will give you significant deviations.
Remember my 2-state system. Even if the state of the individual 2-state systems is 0.0001|+> + 0.9999999|->, there's a finite norm for the branch in which an observer saw |+++++++++++++++++++++++>.
Now, if you somehow could say that this norm WAS a probability, then we're in agreement that such a state is so improbable that we can forget about it. However, if the Hilbert norm is NOT a probability, it is simply a component in the final state.
And the whole question is: what local physical process makes us decide that we can now call this Hilbert norm a probability (and apply the Born rule) ?
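To put numbers on the 2-state example (a minimal sketch; the amplitudes I wrote above don't normalize exactly, so I use a normalized pair close to them):

```python
import math

# N copies of a 2-state system in the state a|+> + b|->.
a = 1e-4
b = math.sqrt(1 - a**2)          # normalized partner of a
N = 23

# Squared Hilbert norm of the branch where the observer saw all |+>:
norm2_all_plus = a ** (2 * N)
print(norm2_all_plus)            # ~1e-184: tiny, but strictly nonzero

# Squared norm of all branches containing at least one |+> outcome:
norm2_some_plus = 1 - b ** (2 * N)
print(norm2_some_plus)           # ~2.3e-7: small, but never zero for finite N
```

If those numbers are probabilities, fine, forget the deviant branches; but calling them probabilities is precisely the step that needs a physical justification.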
Now, imagine for a moment that the measurement part of a quantum system in a state |psi> generates an infinite number of photons towards your classical human-eye detector.
See, the story is already over. You assume that "seen by the human eye" is a classical system. OK, you fixed the place where the Born rule is applied. In fact, it is not very far from where I fix it (only a few cm further back :-)
But the problem is: if you are now going into detail, and you look at the physics of the human eye, you will see certain molecules interact with photons, through the EM interaction. Well, this interaction is described by UNITARY transformations.
=> P_i |psi> = c_i |e_i>|i> (the probability of getting this outcome, i.e. of the state |e_i>|i> being "true", is given by |c_i|² for a state |psi>; note also that the outcomes |e_i>|i> are mutually exclusive). Because we are working with an orthogonal basis (|i>), we can't have, during a trial with the state |psi>, more than one elementary measurement apparatus triggering an outcome.
Again, you have now applied the Born rule in your apparatuses.
But if you analyse them, they too consist of atoms and EM fields, and all this evolves in a unitary way.
So what will come out of it will be a big entanglement, if you apply rigorously your quantum theory. You will get AT THE SAME TIME a superposition of an infinity of red photons, of blue photons etc... in exactly that superposition which gave you the initial |psi>.
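A toy version of that unitarity argument (a minimal sketch: one qubit "measured" by a one-qubit pointer through a CNOT-style interaction; all the names are illustrative):

```python
import numpy as np

# System |psi> = c0|0> + c1|1>, pointer starts in |ready> = |0>.
c0, c1 = 0.6, 0.8
psi = np.array([c0, c1])
pointer = np.array([1.0, 0.0])

# Unitary measurement interaction (CNOT): flips the pointer iff system is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

joint = np.kron(psi, pointer)    # |psi> (x) |ready>
after = CNOT @ joint             # unitary evolution only, no Born rule applied
print(after)                     # c0|00> + c1|11>: an entangled superposition;
                                 # BOTH outcomes are still present in the state
```

The interaction never selects an outcome; it only entangles.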
Therefore, in the vicinity of your eye, you will always get an infinite number of free [independent] photons as the outcome, coming from just one of the elementary measurement apparatuses for each trial with the state |psi>.
If somehow you could explain to me WHY we have to do away with our matter-EM interaction Hamiltonian and its associated unitary evolution, and apply the Born rule in this apparatus...
BTW, it escapes me why you send an infinity of photons. In principle, one would be enough, if it is in an energy eigenstate: I will measure its energy with certainty, no ?
Note that *all* of the processes used in this measurement experiment are statistical. However, we get what seems to be a deterministic result. We have not used any collapse.
You did, when you said that only one apparatus could fire.
The other point I want to underline in order to improve our mutual understanding is the separation of the time evolution of statistics from the statistics themselves. Your example on the classical microscopic system states is perfect.
Both in QM and in classical probability, the "formal" statistics do not refer to time. If we use the time label, we are just giving a collection of different statistics: law(t_i) = (t_i, law_i), where law_i describes a time-independent probability law. Therefore, we can formally separate the time evolution problem from the statistical results (a bit more complicated in relativity, left for later ;). The main advantage of this approach resides in the fact that we can say that at a given time, a QM state together with a given observable is formally equivalent to a random variable together with a probability law.
I would then like to point out that, IF you apply the Born rule, all the statistics that come out of the experiments you did in the past can of course be described by a formal statistical system. After all, that's how people use quantum theory in practice.
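In code, that "formal statistical system" is just this mapping (a minimal sketch with an arbitrary toy observable): a state plus an observable induce, via the Born rule, an ordinary probability law over the eigenvalues, and from there on it is classical probability theory.

```python
import numpy as np

# A state plus an observable induce a classical probability law (Born rule).
psi = np.array([0.6, 0.8])       # toy state
A = np.array([[0.0, 1.0],        # toy Hermitian observable
              [1.0, 0.0]])

eigvals, eigvecs = np.linalg.eigh(A)
probs = np.abs(eigvecs.T @ psi) ** 2   # |<a_i|psi>|^2 for each eigenvalue a_i
for val, p in zip(eigvals, probs):
    print(f"value {val:+.0f} with probability {p:.2f}")
# (eigvals, probs) is now an ordinary random variable with its law.
```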
The point I was trying to make was something different (I think): it is that we don't know of any local physical process that makes us apply the Born rule! There must be something in nature that makes us apply it, and apparently it is not through EM, weak or strong interactions ... and if superstring theory is correct, not by gravity either.
What we know already is that if that rule IS applied somewhere, THEN we may apply it with macroscopic measurement apparatus... except in EPR kinds of cases where the macroscopic entanglement CAN interfere at the moment of observation of the correlations.
But of course, you can always set up a formal probabilistic system that will spit out the correct statistics (call it a physicist who knows quantum mechanics and applies the Born rule in his calculations when he writes up his paper!). The problem I have is not with the probabilistic nature, nor with the fact that this probabilistic nature gives rise to "deterministic" expectation values (statistics) if we observe an infinity of systems.
It is in the PHYSICS that I have a problem. I know all interactions. I know perfectly well how they give rise to unitary evolution. But "at the end of my paper, I have to apply the Born rule". Where is the physics of the Born rule ?
cheers,
Patrick.