akhmeteli said:
Then maybe you are drawing a distinction that is too fine for me:-). Indeed, your rephrasing of their phrase can be successfully applied to my statement about Euclidean geometry:-) Until you have an actual geometry in your possession, you can also argue that a theory “making use of both loopholes would be very contrived-looking”.
Well, here I suppose I must appeal to mathematical and physical intuitions--I don't in fact think it's plausible that a smart mathematician living in the days before Euclidean and non-Euclidean geometry would believe that the fact that a quadrilateral in a plane, or a triangle on a sphere, has angles adding up to something other than 180 degrees should imply that only a "contrived" theory of geometry could agree with the conjecture that triangles in a plane have angles summing to 180 degrees. In contrast, I think lots of very smart physicists would agree with the intuition that a local realist theory consistent with all past experiments, but which predicted no Bell inequality violation in ideal loophole-free experiments, would have to be rather "contrived". Perhaps one reason for this is that we know what is required to exploit each loophole individually: exploiting the detector efficiency loophole requires that in some pairs of particles, one member of the pair has a hidden variable that makes it impossible to detect (see billschnieder's example in posts #113 and #115 on this thread), whereas exploiting the locality loophole requires that whichever member of the pair is detected first sends out some sort of signal containing information about what detector setting was used, a signal which causes the other particle to change its own hidden variables in just the right way as to give statistics that agree with QM predictions. Does your model contain both such features?
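To make feature 1) concrete, here is a toy Python sketch of the detection-loophole mechanism (my own illustration for this post, not the model under discussion, and not tuned to reproduce the QM statistics exactly): a shared hidden variable decides not only each outcome but also whether the particle is detected at all, so the post-selected coincidence statistics are biased relative to the full ensemble.

```python
import math
import random

def correlation(a, b, n=100_000, threshold=0.2):
    """Toy local-realist model exploiting the detector efficiency loophole.

    Each photon pair shares a hidden polarization angle lam. A photon whose
    polarization is nearly ambiguous relative to the analyzer axis
    (|cos 2(angle - lam)| < threshold) is never detected, so only the
    'decisive' pairs enter the coincidence statistics.
    Returns the estimated correlation E(a, b) over coincident detections.
    """
    total, coincidences = 0, 0
    for _ in range(n):
        lam = random.uniform(0.0, math.pi)    # shared hidden variable
        ca = math.cos(2 * (a - lam))
        cb = math.cos(2 * (b - lam))
        if abs(ca) < threshold or abs(cb) < threshold:
            continue                          # undetected: no coincidence
        A = 1 if ca > 0 else -1
        B = 1 if cb > 0 else -1
        total += A * B
        coincidences += 1
    return total / coincidences

print(correlation(0.0, math.pi / 8))
```

With threshold = 0 this reduces to a familiar linear-correlation local model; raising the threshold discards the "ambiguous" pairs and steepens the post-selected correlation, which is the general mechanism detection-loophole models exploit.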
JesseM said:
You addressed it by suggesting your own model was non-contrived, but didn't give a clear answer to my question about whether it can actually give statistical predictions for the experiments performed so far, like the Innsbruck experiment and the NIST experiment.
akhmeteli said:
I did not give you a clear answer because I don’t have it and don’t know how to obtain it within a reasonable time frame.
OK, but then when I said "still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment", why did you respond (in post #579) by saying "I agree, 'most experts would agree' on that. But what conclusions am I supposed to draw from that? That the model I offer is 'very contrived'?" After all, the question of whether your model is "contrived" is only relevant to my own statement if in fact your model can "get correct predictions ... for the experiments that have already been performed". If you don't yet know whether your model does this, then you can't offer it as a counterexample to the claim that any model that did do it would have to be very contrived.
akhmeteli said:
You want me to emulate the above experiments in “my” model.
Yes, that would be needed to show that you have a model that's a counterexample to the "contrived" claim. And even if you can't yet apply your model to existing experiments in all their precise details, you could at least start by seeing what it predicts about some simplified Aspect-type experiment that closes the locality loophole but not the detector efficiency loophole, and another simplified experiment that closes the efficiency loophole but not the locality loophole, and see whether it predicts Bell inequality violations in those cases. As an even more basic step, you could just explain whether it has the two features I noted above: 1) hidden variables which ensure that some particles aren't detected, no matter how good the detectors, and 2) some sort of signal from the first measured particle that contains information about the detector setting, together with a way for the other particle to alter its own hidden variables in response to this signal.
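And to make feature 2) concrete, a similarly hedged toy sketch (again my own illustration, here using the spin-1/2 singlet probabilities): if a signal carrying the first detector's setting and outcome can reach the second particle before it is measured--which is exactly what an open locality loophole permits--then reproducing the QM statistics is trivial.

```python
import math
import random

def measure_pair(a, b):
    """Toy local-realist model exploiting the locality loophole.

    Assume Alice measures first and a signal carrying (a, A) reaches Bob's
    particle before his measurement (possible whenever the two measurement
    events are not spacelike separated). The particle then adjusts its
    hidden state to match the spin-1/2 singlet rule P(B = A) = sin^2((a-b)/2).
    """
    A = random.choice([1, -1])                 # Alice's outcome
    p_same = math.sin((a - b) / 2) ** 2        # singlet probability of B = A
    B = A if random.random() < p_same else -A
    return A, B

# estimate E(a, b), which should approach the QM value -cos(a - b):
a, b, n = 0.0, math.pi / 3, 100_000
E = sum(A * B for A, B in (measure_pair(a, b) for _ in range(n))) / n
print(E)  # near -cos(pi/3) = -0.5
```

A model exploiting both loopholes at once would need to combine this signaling mechanism with the non-detection mechanism sketched earlier.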
akhmeteli said:
Therefore, so far my reasoning is different. Let me ask you this: if I offered a model that would have the same unitary evolution as quantum electrodynamics, not just “a” quantum field theory, would that suggest that the actual results of past experiments may be successfully emulated in this model? I’ll proceed (or not, depending on your answer) once I have it.
Unitary evolution only predicts complex amplitudes, not real-valued probabilities. If you have some model that predicts actual statistics in a local way, and whose predictions agree with those of unitary evolution + the Born rule, then say so--but of course unitary evolution + the Born rule predicts violations of Bell inequalities even in loophole-free experiments, and you said earlier that you weren't claiming your model could give BI violations even in loophole-free experiments. So your claims about your model are rather confusing, to say the least.
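To spell out the distinction in standard textbook notation (nothing here is specific to your model):

```latex
\[
|\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U(t) = e^{-iHt/\hbar}, \qquad
P(i) = \bigl|\langle i \mid \psi(t)\rangle\bigr|^2 .
\]
```

The first two expressions are all that "the same unitary evolution as quantum electrodynamics" buys you--complex amplitudes; the Born rule in the last expression is the extra ingredient any statistical prediction requires.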
akhmeteli said:
As I said, the model gives predictions for probabilities the same way Bohmian mechanics does
When did you say that?
akhmeteli said:
– you yourself described the relevant procedure.
I don't remember describing a procedure for getting probabilities in Bohmian mechanics--what post are you talking about? Bohmian mechanics treats the position variable as special: its equations say that particles have a well-defined position at all times, and measurement results all depend on position in a fairly straightforward way (for example, spin measurements can be understood in terms of whether a particle is deflected to a higher or lower position by a Stern-Gerlach apparatus). The equations for particle behavior are deterministic, but for every initial quantum state Bohmian mechanics posits an ensemble of possible hidden-variable states compatible with that quantum state, with the positions distributed according to the |ψ|² measure, so probabilities are derived from statistics over this ensemble (this is analogous to classical statistical mechanics, where we consider the set of possible 'microstates' compatible with a given observed 'macrostate' and assign them a standard probability measure). Does all of this also describe how predictions about probabilities are derived in your model? If not, where does the procedure in your model differ?
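For reference, here is the standard Bohmian prescription as I understand it (the "quantum equilibrium" formulation; if your model derives probabilities some other way, that difference is exactly what I'm asking about):

```latex
\[
\frac{d\mathbf{Q}_k}{dt}
  = \frac{\hbar}{m_k}\,
    \operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)
    \Bigg|_{\mathbf{q}=\mathbf{Q}(t)},
\qquad
\rho(\mathbf{q},t) = |\psi(\mathbf{q},t)|^2 .
\]
```

The guidance equation on the left is fully deterministic; all the probability comes from taking the ensemble of initial configurations to be |ψ|²-distributed, a distribution the dynamics then preserves (equivariance), which is what reproduces the Born-rule statistics for position-based records.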
akhmeteli said:
So let me ask you another question: do you think that Bohmian mechanics offers expressions for probabilities? If yes, then how is “my” model different from Bohmian mechanics, such that it cannot give expressions for probabilities?
I'll answer that question based on your answer to my questions above.
JesseM said:
(you may be able to derive probabilities from amplitudes using many-worlds type arguments, but as I said part of the meaning of 'local realism' is that each measurement yields a unique outcome)
akhmeteli said:
Again, as I said, “local realism” does not necessarily require that “each measurement yields a unique outcome” (see also below), and I don’t need any “many-worlds type arguments”.
I think you may be misunderstanding what I mean by "unique outcome". Suppose the experimenter has decided that if he sees the result "spin-up" on a certain measurement he will kill himself, but if he sees the result "spin-down" he will not. Are you saying that at some specific time shortly after the experiment, there may not be a unique truth about whether the experimenter is alive or dead at that time? If you do think there should be a unique truth, then that implies you do think that "each measurement yields a unique outcome" in the sense I meant. If you don't think there is a unique truth, then isn't this by definition a "many-worlds type argument", since you are positing multiple "versions" of the same experimenter?
JesseM said:
Suppose we do a Wigner's-friend type thought experiment, where a small quantum system is first measured by an experimenter in an isolated box, and from our point of view this just causes the experimenter to become entangled with the system rather than any collapse occurring. Then we open the box and measure both the system and the record of the previous measurement taken by the experimenter inside, modeling this second measurement as collapsing the wavefunction. Suppose the two measurements on the small system were of a type that, according to the projection postulate, should yield a time-independent eigenstate. Are you claiming that, in this situation where we model the first measurement as just creating entanglement rather than collapsing the wavefunction, there is some nonzero possibility that the record of the first measurement will show a different state than the one we find on the second measurement? I'm not sure, but I don't think that would be the case--even if we assume unitary evolution, as long as there is some record of previous measurements, the statistics seen when comparing the records to the current measurement should be the same as the statistics you'd get by assuming the earlier measurements (the ones which resulted in the records) collapsed the wavefunction of the system being measured according to the projection postulate.
akhmeteli said:
Sorry, JesseM, I cannot accept this argument. The reason is as follows. If you take unitary evolution seriously (and I suspect you do), then you may agree that unitary evolution does not allow irreversibility, so, strictly speaking, no “record” can be permanent: a magnetic domain on a hard disk can flip, and ink in a lab log can disappear, however crazy that may sound.
I agree, but I think you misunderstand my point. Any comparison of the predictions of the "standard pragmatic recipe" with another interpretation like the MWI's endless unitary evolution must be done at some particular time--what happens in the future of that time doesn't affect the comparison! My point is that if we consider
any series of experiments done in some finite time window ending at time T1, and at T1 we look at all records existing at that time in order to find the statistics, then both of the following two procedures should yield the same predictions about these statistics:
1) Assume that unitary evolution applied until the very end of the window, so any measurements before T1 simply created entanglement with no "wavefunction collapse", then take the quantum state at the very end and use the Born rule to see what statistics will be expected for all records at that time
2) Assume that for each measurement that left an (error-free) record which survived until T1, that measurement did collapse the wavefunction according to the projection postulate, with unitary evolution holding in between each collapse, and see what predictions we get about the statistics at the end of this series of collapses-with-unitary-evolution-in-between.
Would you agree the predicted statistics would be the same regardless of which of these procedures we use? If you do agree, then I'd say that means the standard pragmatic recipe involving the projection postulate should work just fine for any of the types of experiments physicists typically do, including Aspect-type experiments. The only case where the projection postulate may give incorrect statistical predictions is if you treat some measurement as inducing a "collapse" even though the information about that measurement was later "erased" in a quantum sense (not just by burning the records or something, which might make the information impossible to recover in practice but not necessarily in principle). But in any case, the rules for using the projection postulate are not really spelled out, and most physicists would understand that it wouldn't be appropriate in such a case.
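Here is a minimal version of that claim in symbols, assuming an ideal error-free record: let the intermediate measurement have projectors {P_a}, let V be the premeasurement interaction V: |s⟩|R_0⟩ ↦ Σ_a (P_a|s⟩)|R_a⟩ with orthonormal record states |R_a⟩, and let U act only on the system between the measurement and T1. Then procedure 1) gives

```latex
\[
P(a,b)
  = \bigl\| \bigl(P_b \otimes |R_a\rangle\langle R_a|\bigr)\,
            (U \otimes 1)\,V\,|\psi_0\rangle|R_0\rangle \bigr\|^2
  = \bigl\| P_b\,U\,P_a\,|\psi_0\rangle \bigr\|^2 ,
\]
```

and the right-hand side is exactly procedure 2)'s collapse-then-evolve prescription. The equality holds provided the record states stay orthogonal and undisturbed until T1--that is, provided the record is not quantum-erased, matching the caveat above.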
akhmeteli said:
If you challenge that, you challenge unitary evolution, and if you challenge unitary evolution, there’s little left of quantum theory. Furthermore, in our previous discussion, I argued that even death (we were talking about Schroedinger’s cat), strictly speaking, cannot be permanent, because of unitary evolution and the quantum recurrence theorem.
Quantum recurrence isn't really relevant; the question is just whether there was a unique truth about whether the cat was alive or dead at some specific time, not whether the cat may reappear in the distant future. As long as there is some record of whether the cat was alive or dead at time T, it's fine for us to say there was a definite truth (relative to our 'world', at least), but if the records are thoroughly erased we can't say this.
akhmeteli said:
It is not so important for this thread how the “pragmatic recipe” is used in general; what is important is how the projection postulate is used in the proof of the Bell theorem: it is supposed that as soon as you measure the spin projection of one particle, the spin projection of the other particle becomes definite immediately, according to the projection postulate. So the projection postulate is not used “only at the very end of the complete experiment” here--you have highlighted an important point.
Well, see my point about the agreement in statistical predictions between method 1) and 2) above.