Local realism ruled out? (was: Photon entanglement and )

  • #651
RUTA said:
In the Single World, the predicted distribution is what each experimentalist should find and, indeed, our QM predictions match said distributions.
That's just not true, a finite number of trials will not always yield exactly the same statistics as the ideal probability distribution, statistical fluctuations are always possible. If you flip a coin 100 times, and the coin's physical properties are such that it has a 50% chance of landing heads or tails on a given flip, that doesn't imply you are guaranteed to get exactly 50 heads and 50 tails! In fact, if you have a very large collection of series of 100 flips, a small fraction of the series will have statistics that differ significantly from the true probabilities (the greater the 'significant' difference, the smaller the fraction)--eventually you might see some series where 100 flips of a fair coin yielded 80 heads and 20 tails. Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities.
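To put a number on that 80/20 example, here is a minimal Python sketch computing the exact binomial tail:

[code]
from math import comb

# Exact chance that 100 flips of a fair coin give 80 or more heads
p = sum(comb(100, k) for k in range(80, 101)) / 2**100
print(p)  # roughly 5.6e-10: tiny, but strictly nonzero
[/code]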

Do you disagree with any of the statements above? If so, what's the first one you would disagree with? If you don't disagree with any, perhaps you would indeed say it's impossible to "do science" in a universe where the number of civilizations is infinite (assuming there is indeed a random element to experiments that can't be eliminated with better experimental techniques, which would even be true in a deterministic hidden-variable model like Bohmian mechanics if it's impossible to measure/control the hidden variables). But I think this would be a pretty strange position to take, philosophically.
 
  • #652
JesseM said:
That's just not true, a finite number of trials will not always yield exactly the same statistics as the ideal probability distribution, statistical fluctuations are always possible. If you flip a coin 100 times, and the coin's physical properties are such that it has a 50% chance of landing heads or tails on a given flip, that doesn't imply you are guaranteed to get exactly 50 heads and 50 tails! In fact, if you have a very large collection of series of 100 flips, a small fraction of the series will have statistics that differ significantly from the true probabilities (the greater the 'significant' difference, the smaller the fraction)--eventually you might see some series where 100 flips of a fair coin yielded 80 heads and 20 tails. Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities.

Do you disagree with any of the statements above? If so, what's the first one you would disagree with? If you don't disagree with any, perhaps you would indeed say it's impossible to "do science" in a universe where the number of civilizations is infinite (assuming there is indeed a random element to experiments that can't be eliminated with better experimental techniques, which would even be true in a deterministic hidden-variable model like Bohmian mechanics if it's impossible to measure/control the hidden variables). But I think this would be a pretty strange position to take, philosophically.

You obtain results with an uncertainty in experimental physics, so you only need the result to agree with theory within a certain range (that's the source of statements having to do with "confidence level"). For an introductory paper on how QM statistics are obtained (they even supply the data so you can reproduce the results yourself) see: "Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory," Dietrich Dehlinger and M. W. Mitchell, Am. J. Phys. v70, Sep 2002, 903-910. Here is how they report their result in the abstract, for example:

"Bell’s idea of a hidden variable theory is presented by way of an example and compared to the quantum prediction. A test of the Clauser, Horne, Shimony, and Holt version of the Bell inequality finds S = 2.307 +/- 0.035, in clear contradiction of hidden variable theories. The experiments described can be performed in an afternoon."

According to your view, they can't say "in clear contradiction," but that's standard experimental physics. And, if you were right, we couldn't do experimental physics. Thankfully, you're wrong :-)
 
  • #653
RUTA said:
You obtain results with an uncertainty in experimental physics, so you only need the result to agree with theory within a certain range (that's the source of statements having to do with "confidence level").
Yes, and no matter how many trials you do, as long as the number is finite there is some small probability that your results will differ wildly from the "true" probabilities determined by the laws of QM. For example, if in a particular experiment the QM prediction is that there is a 25% chance of seeing a particular result, then even if the experiment is done perfectly and QM is a correct description of the laws of physics, and even if you did a huge number of trials, there is some nonzero probability you would get that particular result on more than 90% of all trials, due to nothing but a statistical fluctuation. If the number of trials is large enough the probability of such a large statistical fluctuation may be tiny--say, one in a googol--but as long as the number of trials is finite the probability is nonzero.
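To illustrate with concrete numbers (n = 100 trials, chosen arbitrarily for the sketch), the fluctuation probability is nonzero but absurdly small:

[code]
from math import comb

# Chance that a result with true probability 0.25 occurs on more than 90%
# of n independent trials (n = 100 here is arbitrary, just for illustration)
n, p = 100, 0.25
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(91, n + 1))
print(tail)  # on the order of 1e-44: astronomically small, but nonzero
[/code]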

If you continue to disagree with what I'm saying about the situation with the MWI being no worse than the situation with an infinite universe containing an infinite number of civilizations, I'd appreciate an answer to my question about what specific statement in the chain of argument you disagree with:
That's just not true, a finite number of trials will not always yield exactly the same statistics as the ideal probability distribution, statistical fluctuations are always possible. If you flip a coin 100 times, and the coin's physical properties are such that it has a 50% chance of landing heads or tails on a given flip, that doesn't imply you are guaranteed to get exactly 50 heads and 50 tails! In fact, if you have a very large collection of series of 100 flips, a small fraction of the series will have statistics that differ significantly from the true probabilities (the greater the 'significant' difference, the smaller the fraction)--eventually you might see some series where 100 flips of a fair coin yielded 80 heads and 20 tails. Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities.

Do you disagree with any of the statements above? If so, what's the first one you would disagree with?
RUTA said:
For an introductory paper on how QM statistics are obtained (they even supply the data so you can reproduce the results yourself) see: "Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory," Dietrich Dehlinger and M. W. Mitchell, Am. J. Phys. v70, Sep 2002, 903-910. Here is how they report their result in the abstract, for example:

"Bell’s idea of a hidden variable theory is presented by way of an example and compared to the quantum prediction. A test of the Clauser, Horne, Shimony, and Holt version of the Bell inequality finds S = 2.307 +/- 0.035, in clear contradiction of hidden variable theories. The experiments described can be performed in an afternoon."
Presumably there was some confidence interval they used to get the error bars of +/- 0.035. For example, they might have calculated that the probability that S is greater than 2.307 + 0.035 or less than 2.307 - 0.035 is below the 5-sigma level, or about a 0.00005% chance (perhaps based on considering a null hypothesis where S was outside of that range, and finding a 0.00005% chance that the null hypothesis would give a result of 2.307 in their experiment).
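As a rough sanity check - assuming the quoted +/- 0.035 is a one-standard-deviation Gaussian error, which is my assumption rather than anything stated in the paper - the distance of their S from the local-realist bound of 2 is easy to compute:

[code]
# Assuming the quoted +/- 0.035 is one standard deviation (an assumption):
S, sigma = 2.307, 0.035
print((S - 2) / sigma)  # ~8.8 standard deviations above the CHSH bound of 2
[/code]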
RUTA said:
According to your view, they can't say "in clear contradiction," but that's standard experimental physics.
Whatever gave you the idea that I would say they can't say "in clear contradiction"? If the probability of getting statistics that depart appreciably from the true probabilities is minuscule, then we can be very confident our results are close to the true probabilities. This is true in an infinite universe with an infinite number of civilizations (you haven't told me what you think about this scenario), and it's just as true in the MWI.
 
Last edited:
  • #654
JesseM said:
Yes, and no matter how many trials you do, as long as the number is finite there is some small probability that your results will differ wildly from the "true" probabilities determined by the laws of QM. For example, if in a particular experiment the QM prediction is that there is a 25% chance of seeing a particular result, then even if the experiment is done perfectly and QM is a correct description of the laws of physics, and even if you did a huge number of trials, there is some nonzero probability you would get that particular result on more than 90% of all trials, due to nothing but a statistical fluctuation. If the number of trials is large enough the probability of such a large statistical fluctuation may be tiny--say, one in a googol--but as long as the number of trials is finite the probability is nonzero.

If you continue to disagree with what I'm saying about the situation with the MWI being no worse than the situation with an infinite universe containing an infinite number of civilizations, I'd appreciate an answer to my question about what specific statement in the chain of argument you disagree with:

I disagree with this statement:

"Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities."

In science, we expect every experiment to realize the proper distribution. You would never hear someone present aberrant data as "to be expected, based on the fact that an infinite number of civilizations are doing this very experiment." Most scientists would take this as a reductio against your particular interpretation of statistics in science.

Many Worlds is de facto in agreement with your interpretation. That's why I said (rhetorically), "Why would any scientist subscribe to Many Worlds?"
 
  • #655
RUTA said:
I disagree with this statement:

"Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities."

In science, we expect every experiment to realize the proper distribution.
Are you saying we expect every experiment to exactly realize the proper distribution? If the probability of detecting some result (say, spin-up) is predicted by QM to be 0.5, would you expect that a series of 100 trials would yield exactly 50 instances of that result?

I would say instead that in science, we recognize that, given a large enough number of trials, the probability is very tiny that the statistics will differ significantly from the proper distribution (the law of large numbers). This "very tiny" chance can be quantified precisely in statistics, and it is always nonzero for any finite number of trials. But with enough trials it may become so small we don't have to worry about it, say a 1 in 10^100 chance that the observed statistics differ from the true probabilities by more than some amount epsilon (and in that case, we should expect that 1 in 10^100 civilizations that do the same number of trials will indeed observe statistics that differ from the true probabilities by more than that amount epsilon). From a purely statistical point of view (ignoring what assumptions we might make pragmatically for the purposes of doing science), do you think what I say here is incorrect?
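As a sketch of how such a bound can be computed - here using Hoeffding's inequality, one standard concentration bound among several (my choice, not anything discussed above):

[code]
from math import exp, log

# Hoeffding bound: P(|observed frequency - true p| >= eps) <= 2*exp(-2*N*eps^2).
# Trials N needed to push the bound below 1e-100 for a deviation eps = 0.01:
eps, target = 0.01, 1e-100
N = int(log(2 / target) / (2 * eps**2)) + 1
print(N)                          # about 1.15 million trials
print(2 * exp(-2 * N * eps**2))   # just under 1e-100, as desired
[/code]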
RUTA said:
You would never hear someone present aberrant data as "to be expected, based on the fact that an infinite number of civilizations are doing this very experiment."
No, but that's because it's extremely unlikely that our civilization would happen to be one of those that gets the aberrant result. That doesn't change the fact that in a universe with an infinite number of civilizations doing scientific experiments, any aberrant result will in fact occur occasionally.
RUTA said:
Most scientists would take this as a reductio against your particular interpretation of statistics in science.
I disagree, I think this is a rather idiosyncratic perspective that you hold. Most scientists would not say that an infinite universe with an infinite number of civilizations, a very small fraction of which will see aberrant results throughout their history due to random statistical fluctuations, presents any problem for normal science, because again it's vanishingly unlikely that we happen to be living in one of those unlucky civilizations.
 
  • #656
JesseM said:
Are you saying we expect every experiment to exactly realize the proper distribution? If the probability of detecting some result (say, spin-up) is predicted by QM to be 0.5, would you expect that a series of 100 trials would yield exactly 50 instances of that result?

I assume that's rhetorical.

JesseM said:
No, but that's because it's extremely unlikely that our civilization would happen to be one of those that gets the aberrant result. That doesn't change the fact that in a universe with an infinite number of civilizations doing scientific experiments, any aberrant result will in fact occur occasionally.

Just not here, right? Suppose X claims to have a source that produces 50% spin up and 50% spin down and X reports, "I have a 50-50 up-down source that keeps producing pure up results." If you REALLY believe that your interpretation of statistics in science is correct, then you would HAVE to admit that perhaps X is right. But, what will MOST scientists say? Of course, X is mistaken, he doesn't have a 50-50 source. Why? Because our theory is empirically driven, not the converse.

JesseM said:
I disagree, I think this is a rather idiosyncratic perspective that you hold. Most scientists would not say that an infinite universe with an infinite number of civilizations, a very small fraction of which will see aberrant results throughout their history due to random statistical fluctuations, presents any problem for normal science, because again it's vanishingly unlikely that we happen to be living in one of those unlucky civilizations.

If your view is correct, you could find me a paper published with aberrant results. Can you find me a published paper with claims akin to X supra? Why not? Because the weird stuff only happens in "other places?" Not here?
 
  • #657
RUTA said:
I assume that's rhetorical.
Yes, but a literal interpretation of your statement "In science, we expect every experiment to realize the proper distribution" would imply every experiment should yield precisely the correct statistics. I was illustrating that this statement doesn't really make any sense to me. If you didn't mean it in the literal sense that the observed statistics should precisely equal the correct probabilities, what did you mean?
RUTA said:
Just not here, right? Suppose X claims to have a source that produces 50% spin up and 50% spin down and X reports, "I have a 50-50 up-down source that keeps producing pure up results." If you REALLY believe that your interpretation of statistics in science is correct
Are you saying my statements are incorrect on a purely statistical level? If so, again, can you pinpoint which statement in the second paragraph of my last post is statistically incorrect?
RUTA said:
then you would HAVE to admit that perhaps X is right.
If "perhaps" just means "the probability is zero", then yes. But if I can show the chance X is right is astronomically small, say only a 1 in 10^100 chance that a 50-50 source would actually produce so many up results in a row, then on a pragmatic level I won't believe him. Do you deny that statistically, the probability of getting N heads in a row with a fair coin is always nonzero, though it may be astronomically small if N is very large? If not, do you deny that in an infinite universe with an infinite number of civilizations flipping fair coins, there will be some that do see N heads in a row? These aren't rhetorical questions, I am really having trouble what part of my argument, specifically, you object to.
RUTA said:
But, what will MOST scientists say? Of course, X is mistaken, he doesn't have a 50-50 source.
Of course, I'll say that too. Why wouldn't I? If it would require a statistical fluctuation with a probability of 1 in 10^100 for his theory to be right, then we can say his theory is wrong beyond all reasonable doubt, even if we can't have perfect philosophical certainty that his theory is wrong.
RUTA said:
If your view is correct,
My view of what? Again, in a purely statistical sense, are any of my statements incorrect? If so, which ones?
RUTA said:
you could find me a paper published with aberrant results.
It depends what you mean by "aberrant". If you mean the sort of massive statistical fluctuation that probability theory would say has an astronomically small probability, like 1 in 10^40 or whatever, then 10^40 is much larger than the number of scientific experiments that have been done in human history, so I wouldn't expect any such aberrant results. If you just mean papers with good experimental design that found some result to a confidence of two sigma or something, but later experiments showed the result was incorrect, I don't think it'd be that hard to find such a paper.
 
  • #658
JesseM said:
Yes, but a literal interpretation of your statement "In science, we expect every experiment to realize the proper distribution" would imply every experiment should yield precisely the correct statistics.

Your argument does not make any sense to me, so I am hoping you could clarify your understanding of the meaning of "probability". If a source has a 0.1-0.9 up-down probability, what does that mean in your understanding according to the MWI? Does it mean 10% of the worlds will obtain 100% up and 90% of the worlds will obtain 100% down, or does it mean that in every world there will be 10% up and 90% down? It is not clear from your statements what you mean, or what it has to do with "correct statistics".

If I calculate the probability that the sun will explode tomorrow to be 0.00001, what does that mean in the MWI? Is your understanding that I am calculating the probability of the sun in "my" world exploding, or of all the suns in the multiverse exploding, or what exactly? Or do you think such a probability result does not make sense in science?

I think after attempting to respond to these issues you may appreciate why many wonder, "Why would any scientist subscribe to Many Worlds?"
 
  • #659
JesseM said:
But there are two aspects of this question--the first is whether local realism can be ruled out given experiments done so far, the second is whether local realism is consistent with the statistics predicted theoretically by QM. Even if you don't use the projection postulate to generate predictions about statistics, you need some real-valued probabilities for different outcomes, you can't use complex amplitudes alone since those are never directly measured empirically. And if we understand local realism to include the condition that each measurement has a unique outcome, then it is impossible to get these real-valued statistics from a local realist model.
But I certainly don’t “understand local realism to include the condition that each measurement has a unique outcome”, not necessarily. You may believe that my understanding of local realism is not reasonable, but you may agree that “my” model is local realistic within the common understanding of this term. I already said that you can define a probability density in the model using the expression for the charge density.
JesseM said:
No idea where you got the idea that I would be talking about "approximate" locality from anything in my posts. I was just talking about QM being a "pragmatic" recipe for generating statistical predictions, I didn't say that Bell's theorem and the definition of local realism were approximate or pragmatic. Remember, Bell's theorem is about any black-box experiment where two experimenters at a spacelike separation each have a random choice of detector setting, and each measurement must yield one of two binary results--nothing about the proof specifically assumes they are measuring anything "quantum", they might be choosing to ask one of three questions with yes-or-no answers to a messenger sent to them or something. Bell's theorem proves that according to local realism, any experiment of this type must obey some Bell inequalities. So then if you want to show that QM is incompatible with local realism, the only aspect of QM you should be interested in is its statistical predictions about some experiment of this type, all other theoretical aspects of QM are completely irrelevant to you. Unless you claim that the "pragmatic recipe" I described would actually make different statistical predictions about this type of experiment than some other interpretation of QM like Bohmian mechanics or the many-worlds-interpretation, then it's pointless to quibble with the pragmatic recipe in this context.
I don’t quite get it. First off, I concede that the Bell inequalities cannot be violated in local realistic theories. I don’t question this part of the Bell theorem. The second part of the Bell theorem states that the inequalities can be violated in QM. I don’t question the derivation of this statement, but I insist that its assumptions are mutually contradictory, making this statement questionable. You tell me that measurements typically involve environmental decoherence. I read the following implication from that (maybe I was wrong): that there is no contradiction between unitary evolution (UE) and the projection postulate (PP). If you say that the difference between UE and PP has its root in environmental decoherence, I don’t have a problem with that, but that does not eliminate the difference, or contradiction, between them. What I tried to emphasize is that you cannot declare this decoherence or any other root cause of the contradiction negligible; you cannot use any approximations to rule out local realism.
JesseM said:
But that won't produce a local realist theory where each measurement has a unique outcome. Suppose you have two separate computers, one modeling the amplitudes for various measurements which could be performed in the local region of one simulated experimenter "Alice", another modeling the amplitudes for various measurements which could be performed in the local region of another simulated experimenter "Bob", with the understanding that these amplitudes concerned measurements on a pair of entangled particles that were sent to Alice and Bob (who make their measurements at a spacelike separation). If you want to simulate Alice and Bob making actual measurements, and you must assume that each measurement yields a unique outcome (i.e. Alice and Bob don't each split into multiple copies as in the toy model I linked to at the end of my last post), then if the computers running the simulation are cut off from communicating with one another and neither computer knows in advance what measurement will be performed by the simulated experimenter on the other computer, then there is no way that such a simulation can yield the same Bell-inequality-violating statistics predicted by QM, even if you program the Born rule into each computer to convert amplitudes into probabilities which are used to generate the simulated outcome of each measurement. Do you disagree that there is no way to get the correct statistics predicted by any interpretation of QM in a setup like this where the computers simulating each experimenter are cut off from communicating? (which corresponds to the locality condition that events in regions with a spacelike separation can have no causal effect on one another)
Again, I don’t need unique outcomes – no measurement is final.
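For reference, the bound JesseM invokes in the quoted two-computer setup can be checked by brute force. A minimal sketch enumerating all deterministic local strategies (shared randomness can only mix these, so it cannot do better):

[code]
import itertools, math

# A deterministic local strategy fixes Alice's outcomes A0, A1 for her two
# settings and Bob's outcomes B0, B1 for his, each in {-1, +1}, independently
# of the other side's setting. Scan all 16 strategies for the largest CHSH S.
best = max(
    abs(A0*B0 + A0*B1 + A1*B0 - A1*B1)
    for A0, A1, B0, B1 in itertools.product([-1, 1], repeat=4)
)
print(best)              # 2: the CHSH bound for any local model
print(2 * math.sqrt(2))  # ~2.83: the quantum prediction at optimal angles
[/code]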
JesseM said:
The problem is that there is no agreement on how the many-worlds interpretation can be used to derive any probabilities. If we're not convinced it can do so then we might not view it as being a full "interpretation" of QM yet, rather it'd be more like an incomplete idea for how one might go about constructing an interpretation of QM in which measurement just caused the measuring-system to become entangled with the system being measured.
Well, I don’t know much about many worlds, but anyway – it seems this problem does not prevent you from favoring many worlds.
JesseM said:
See my comments above about the Wigner's friend type thought experiment. I am not convinced that you can actually find a situation where a series of measurements are made that each yield records of the result, such that using the projection postulate for each measurement gives different statistical predictions than if we just treat this as a giant entangled system which evolves in a unitary way, and then at the very end use the Born rule to find statistical expectations for the state of all the records of prior measurements. And as I said there as well, the projection postulate does not actually specify whether in a situation like this you should treat each successive measurement as collapsing the wavefunction onto an eigenstate or whether you should save the "projection" for the very last measurement.
I already said, first, that I disagree with your reading of this experiment, and second, that it is important how the projection postulate is used to prove violations in QM.


JesseM said:
I wasn't guessing what he said, I was guessing what he meant by what he said. What he said was only the very short statement "Yes, it is an approximation. However, due to decoherence, this is an extremely good approximation. Essentially, this approximation is as good as the second law of thermodynamics is a good approximation." I think this statement is compatible with my interpretation of what he may have meant, namely "in Bohmian mechanics the collapse is not 'real' (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield." Nowhere did he say that using the projection postulate will yield different statistical predictions about observed results than those predicted by Bohmian mechanics.

If it’s an approximation, it is not precise; if it is not precise, there must be a difference.

JesseM said:
I think they are different only if you assume multiple successive measurements, understand "the projection postulate" to imply that each measurement collapses the wavefunction onto an eigenstate, and assume that for some of the measurements the records of the results are "erased" so that it cannot be known later what the earlier result was. If you are dealing with a situation where none of the measurement records are erased, I'm pretty sure that the statistics for the measurement results you get using the projection postulate will be exactly the same as the statistics you get if you model the whole thing as a giant entangled system and then use the Born rule at the very end to find the probabilities of different combinations of recorded measurement results. And once again, the "projection postulate" does not precisely define when projection should occur anyway; you are free to interpret the projection postulate to mean that only the final measurement of the records at the end of the entire experiment actually collapses the wavefunction.

I don’t quite see what the status of all these statements is. Anyway, I don’t see any reason to agree with them until they are substantiated.
 
  • #660
JesseM said:
(continued from previous post)
I think you misunderstood what I meant by "any" above, I wasn't asking if your model could reproduce any arbitrary prediction made by the "standard pragmatic recipe" (i.e. whether it would agree with the standard pragmatic recipe in every possible case, as I think Bohmian mechanics does). Rather, I was using "any" in the same sense as it's used in the question priests used to ask at weddings, "If any person can show just cause why they may not be joined together, let them speak now or forever hold their peace"--in other words, I was asking if there was even a single instance of a case where your model reproduces the probabilistic predictions of standard QM, or whether your model only deals with complex amplitudes that result from unitary evolution.

I got that about "any" the first time :-) Probabilities can be introduced in "my" model using the expression for the current density, the same way it is done in the Bohm interpretation - so it's pretty much the Born rule, but again, it should be used just as an operational rule.


JesseM said:
The reason I asked this is that the statement of yours I was responding to was rather ambiguous on this point:

If your model does predict actual measurement results, then if the model was applied to an experiment intended to test some Bell inequality, would it in fact predict an apparent violation of the inequalities both in experiments where the locality loophole was closed but not the detector efficiency loophole, and in experiments where the efficiency loophole was closed but not the locality loophole?

I hope and think so, but I am not sure - as I said, I am not sure to what extent it describes experimental results correctly.

JesseM said:
I think you said your model would not predict violations of Bell inequalities in experiments with all loopholes closed--would you agree that if we model such experiments using unitary evolution plus the Born rule (perhaps applied to the records at the very end of the full experiment, after many trials had been performed, so we don't have to worry about whether applying the Born rule means we have to invoke the projection postulate), then we will predict violations of Bell inequalities even in loophole-free experiments?

I am not sure - you need correlations, so you need to use the Born rule twice in each event, and this is pretty much equivalent to the projection postulate. You said very well (I hope I understood you correctly) that the Born rule should be applied at the end of each experiment - that means, I think, you cannot use it twice in each experiment.
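For concreteness, the quantum prediction at issue: with the textbook singlet-state correlation E(a,b) = -cos(a-b) (a standard Born-rule result, not tied to any one interpretation), the CHSH combination reaches 2*sqrt(2) at the usual angle choices. A sketch:

[code]
import math

E = lambda a, b: -math.cos(a - b)      # singlet correlation from the Born rule

a0, a1 = 0.0, math.pi / 2              # Alice's two settings
b0, b1 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))  # 2*sqrt(2) ~ 2.83, above the local-realist bound of 2
[/code]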

JesseM said:
Likewise, would you agree that Bohmian mechanics also predicts violations in loophole-free experiments, and many-worlds advocates would expect the same prediction even if there is disagreement on how to derive it?

I have nothing to say about many worlds, and I am not sure about Bohmian mechanics - Demystifier said that it does predict violations in ideal experiments, but then it seemed he was less categorical about that (see his post 303 in this thread). So I don't know. My guess is that you cannot prove violations in Bohmian mechanics using just unitary evolution; otherwise the relevant proof could be "translated" into a proof in standard QM.
 
  • #661
DrChinese said:
Disagree, as we have already been through this many times. There is nothing BUT evidence of violation of Bell Inequalities. To use a variation on your 34 year old virgin example:

Prosecutor: "We found the suspect over the victim, holding the murder weapon. The victim's last words identified the suspect as the perp. The murder weapon was recently purchased by the suspect, and there are witnesses who testified that the suspect planned to use it to kill the victim." Ah, says the defense attorney, but where is the photographic evidence of the crime itself? This failure is proof of the suspect's innocence!

The problem is you could write an equally winning speech for the prosecutor, proving that the sum of the angles of a planar triangle is not 180 degrees.

There is another thing. There is a huge difference between "beyond reasonable doubt" in court and in science. You know that DNA testing has led to the acquittal of maybe hundreds of people or more. It is no small matter to imprison or execute an innocent person. I have even heard that prosecutors try to exclude mathematicians from their juries because mathematicians' requirement for "beyond any doubt" is much stricter than the national average (whether this is true or not is not important; it's a good illustration anyway).

I'd say there is some sound reason behind this difference: a crime is not reproducible, and science is supposed to be. However, 46 years of looking for violations of the genuine inequalities have demonstrated no such violations.

The difference is especially clear in this case, as elimination of local realism is an extremely radical idea, so the burden of proof is very high.

DrChinese said:
You can always demand one more nail in the coffin. In fact, it is good science to seek it. But the extra nail does not change it from "no experimental evidence" (as you claim) to "experimental evidence". It changes it from "overwhelming experimental evidence" (my claim) to "even more overwhelming experimental evidence".

I fail to see how total absence of violations of the genuine Bell inequalities can serve as "overwhelming experimental evidence" of such violations, but obviously you have no such problems.


DrChinese said:
As to the second of your assertions: how QM arrives at its predictions may be "inconsistent" in your book. But it does not cause a local realistic theory to be any more valid. If QM is wrong, so be it. That does not change the fact that all local realistic theories are excluded experimentally.

All local realistic theories can only be ruled out by a demonstration of violations of the genuine Bell inequalities, period. Sorry to disappoint you, but no such demonstration is available. Furthermore, the proof of such violations in quantum theory requires mutually contradicting assumptions. Therefore, violations of the Bell inequalities are on shaky grounds, to put it mildly, both theoretically and experimentally.
 
  • #662
GeorgCantor said:
Do you know of a totally 100% loophole-free experiment from anywhere in the universe?

I can just repeat what I said several times: for some mysterious reason, Shimony is not quite happy about experimental demonstration of violations, Zeilinger is not quite happy... You are quite happy with it? I am happy for you. But that's no reason for me to be happy about that demonstration. Again, the burden of proof is extremely high for such radical ideas as elimination of local realism.
 
  • #663
DrChinese said:
Great point. So there is no evidence for GR either. :biggrin:

There is another issue with akhmeteli's line of reasoning IF CORRECT: there is a currently unknown local force which connects Alice and Bob. This kicks in on Bell tests like Rowe et al which closes the detection loophole. But not otherwise as far as we know.

There is also a strong bias - also previously unknown and otherwise undetected - which causes an unrepresentative sample of entangled pairs to be detected. This kicks in on Bell tests such as Weihs et al, which closes the locality loophole. Interestingly, this special bias does NOT appear when all pairs are considered such as in Rowe, however, the effect of the unknown local force is exactly identical. What a happy coincidence!

And so on for every loophole when closed individually. All the loopholes have exactly the same effect at every angle setting! And if you leave 2 open instead of 1, you also get the same effect! (I.e. if you leave the locality and detection loopholes open simultaneously, the effect is the same as either one individually.)

Great sample of eloquence and logic. Again, just a tiny problem: it's no sweat to rewrite your post to prove that the sum of the angles of a planar triangle is not 180 degrees. But maybe it isn't?

DrChinese said:
Strangely, the entanglement effect (remember that this is just a coincidence per Local Realism) completely disappears if you learn the values of Alice and Bob. Just as QM predicts, but surprisingly, quite contrary to the ideals of Local Realism. After all, EPR thought that you could beat the HUP with entangled particle pairs, and yet you can't!

I don't challenge HUP at all.

DrChinese said:
So to summarize: akhmeteli is essentially asserting that a) 2 previously unknown and otherwise undetected effects exist (accounting for the loopholes); b) these effects are not only exactly equal to each other but are also equal to their combined impact; and c) an expected ability to beat the HUP (per EPR's local realism) has not materialized.

See above
 
  • #664
DrChinese said:
You apparently cannot, as you put forth your opinions as fact. Further, you apparently cannot tell the difference between ad hoc speculation and evidence based opinions. To reasonable people, there is a difference.

I cannot comment on something lacking any specifics.

DrChinese said:
There is a huge difference in your speculation on loopholes (notice how you cannot model the behavior of these despite your unsupported claims) and RUTA's opinions (which he can model nicely using both standard and original science).

Again, how about specifics? What unsupported claims, exactly? I did not claim I can model loopholes within a reasonable time frame.
 
  • #665
DrChinese said:
So basically, this version is useless as is (since it cannot predict anything new and cannot explain existing well); but you want us to accept that a future version might be valuable. That may be reasonable, I can see the concept and that is certainly a starting point for some good ideas. But it is a far cry to go from here to saying your point is really made.

What point related to the model did I claim to have made that I in fact did not?

You call my model useless. I respectfully disagree. Irrespective of any interpretation of quantum theory, it adds some rigorous, and therefore valuable, results to mathematical physics; for example, it demonstrates a surprising and simple result: the matter field can be naturally eliminated from scalar electrodynamics.

No, I don't have time to "explain existing well" using the model, but the model does not belong to me anymore, so those who wish can find out whether it's good or bad at explaining. I think the model adds meaningful and specific material for discussions of the interpretation of quantum theory. Anybody can use it to support their own points or question other people's points. For example, it can be used to analyze such no-go theorems as the Bell theorem: it shows that not all quantum field theories are "non-local-realistic". I guess this is a new and interesting result, no matter what interpretation you favor.

No, the model perhaps cannot predict anything new. However, if it had the same unitary evolution as quantum electrodynamics, rather than "a" quantum field theory, it would be much more valuable, although in that case it certainly could not predict anything new either. Therefore, the inability to predict something new may be the least of the model's problems.

DrChinese said:
Santos, Hess and many others have gone down similar paths with similar arguments for years. Where did they end up?

For example, according to you, Santos "ended up" convincing a few good people that "all loopholes should be closed simultaneously". You call that a "questionable conclusion"; I see it as a genuine contribution to our body of knowledge.

DrChinese said:
Please keep in mind that you should not expect to post speculative ideas in this forum with impunity. This forum is for generally accepted science.

What speculative ideas? Until recently, practically everything I said was published in peer-reviewed journals by others. Now that my article was accepted for publication, I just added a discussion of some of my mathematical results from that article.

With all due respect, I believe you post speculative ideas in this forum with impunity when you question the mainstream fact (not opinion) that there have been no experimental demonstrations of violations of the genuine Bell inequalities.
 
  • #666
JesseM said:
Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities.

But this doesn’t make sense, does it? If there’s 1 in 10^100 MWI civilizations...

JesseM said:
I would say instead that in science, we recognize that, given a large enough number of trials, the probability is very tiny that the statistics will differ significantly from the proper distribution (the law of large numbers). This "very tiny" chance can be quantified precisely in statistics, and it is always nonzero for any finite number of trials. But with enough trials it may become so small we don't have to worry about it, say a 1 in 10^100 chance that the observed statistics differ from the true probabilities by more than some amount epsilon (and in that case, we should expect that 1 in 10^100 civilizations that do the same number of trials will indeed observe statistics that differ from the true probabilities by more than that amount epsilon). From a purely statistical point of view (ignoring what assumptions we might make pragmatically for the purposes of doing science), do you think what I say here is incorrect?

That gets 'unlucky' aberrant result...

JesseM said:
... it's extremely unlikely that our civilization would happen to be one of those that gets the aberrant result. That doesn't change the fact that in a universe with an infinite number of civilizations doing scientific experiments, any aberrant result will in fact occur occasionally.


Q: Why on Earth are these aberrant results ALWAYS measured in the SAME 'unlucky' civilization??

JesseM, do you get what I’m saying? Your example probably works for ONE experiment, flipping coins, but our whole world is built on an extremely large number of microscopic "experiments" with different probabilities being realized every nanosecond.

So JesseM, you must explain why this "googolplexian unluckiness" ALWAYS hits the same poor civilization EVERY TIME, and is not evenly spread out over all MWI civilizations, including ours...?:bugeye:?:bugeye:?
 
Last edited:
  • #667
I just found out that this, too, is probably false:
akhmeteli said:
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.

http://en.wikipedia.org/wiki/Mathem...quantum_mechanics#The_problem_of_measurement
...
note, however, that von Neumann's description dates back to the 1930s and is based on experiments as performed during that time – more specifically the Compton scattering experiment (http://en.wikipedia.org/wiki/Compton_scattering); it is not applicable to most present-day measurements within the quantum domain
 
Last edited by a moderator:
  • #668
DevilsAvocado said:
So JesseM, you must explain why this "googolplexian unluckiness" ALWAYS hits the same poor civilization EVERY TIME, and is not evenly spread out over all MWI civilizations, including ours...?:bugeye:?:bugeye:?

Who knows what improbable events occur here, though? There might even be a universe out there where crazy stuff exists, like an octopus that can correctly predict the result of every football game. :wink:


On another note, most interpretations discussed here seem to be ones with a deterministic core, but why do so many feel the need for the world to be deterministic? Is there really no serious/valid interpretation candidate that describes things as they are seen in the lab, i.e. random and non-local? (Shut up and calculate is no interpretation.)
 
  • #669
DevilsAvocado said:
Q: Why on Earth are these aberrant results ALWAYS measured in the SAME 'unlucky' civilization??

Do you want a crazy idea? Note: it is just an idea, a model, I don't claim anything.

So, MWI. There are 'normal', 'regular' branches. There are also 'weird' branches, where rare things happen all the time or sometimes. Among them there are strange branches, where rare things happen according to some rule, which we can call a pseudo-law.

For example, there is a branch where uranium nuclei do not decay on Fridays - at all. Just by pure chance. So far there is no value in what I said - yes, there are some branches, so what?

But now let's assume that consciousness is not possible in a 'bare' Universe, but is possible in a Universe + some pseudo-laws. Then only the weird branches with pseudo-laws are observed! What a conspiracy from nature!
 
  • #670
Zarqon said:
Who knows what improbable events occur here, though? There might even be a universe out there where crazy stuff exists, like an octopus that can correctly predict the result of every football game. :wink:

LOL! Yeah, and why doesn’t that weird thing happen here!? :biggrin: And why doesn’t the same octopus settle the FIFA World Cup by shooting simultaneously with all eight of his tentacles in the last penalty shootout?? :smile:

Zarqon said:
On another note, most interpretations discussed here seem to be ones with a deterministic core, but why do so many feel the need for the world to be deterministic? Is there really no serious/valid interpretation candidate that describes things as they are seen in the lab, i.e. random and non-local? (Shut up and calculate is no interpretation.)

I’m with you all the way Broo. :approve:
 
Last edited:
  • #671
Dmitry67 said:
But now let's assume that consciousness is not possible in a 'bare' Universe, but is possible in a Universe + some pseudo-laws. Then only the weird branches with pseudo-laws are observed! What a conspiracy from nature!

Yeah! And I think there is a name for that conspiracy... the anthropic principle (http://en.wikipedia.org/wiki/Anthropic_principle)! :wink:
 
Last edited by a moderator:
  • #672
Alright, JesseM, I’m going to provide a detailed response in hopes of ending the confusion.

First, I assume by “flipping a coin” you mean a phenomenon with an unequivocally 50-50 outcome. According to Newton’s laws, the literal flipping of a coin will produce a deterministic outcome, so the 50-50 outcome is not ontological, but epistemological. I do science in an effort to explore ontology, not epistemology. In order to do this, I have to make epistemological assumptions. It is one of those assumptions that you and I differ on.

I assume we both agree that there are statistical regularities in Nature. The question is, how are they instantiated? The answer to this question tells us whether or not such regularities can be discovered scientifically. I will argue that, according to the JesseM belief (what you call “pure statistics”), it is impossible to know whether or not you have discovered any such regularity. In contrast, according to the RUTA belief, science can discover these regularities. Conclusion: Most scientists probably subscribe to the RUTA belief (either tacitly or explicitly, but at least pragmatically).

Consider a series of experiments designed to find a statistical regularity of Nature (SRN). Each experiment conducts many trials, each with a distribution of outcomes. Many experiments produce many distributions, so that we have a distribution of distributions at any given location in the universe (assumed infinite).

According to the JesseM belief, all conceivable distributions of distributions are instantiated in the universe and only collectively do they yield the SRN being investigated.

According to the RUTA belief, each distribution of distributions yields the SRN being investigated.

P1. We don’t know the SRN under investigation, that’s why we’re doing the experiment.
P2. If JesseM is right, there are distributions of distributions nowhere “near” the SRN. [Define this proximity per a number of “standard deviations” obtained over the distribution of distributions itself. Pick any number you like, since, according to JesseM, all conceivable distributions of distributions are realized.]
C1. Any particular location in the universe doesn’t (and can’t) know whether or not their distribution of distributions is “near” the SRN.
P3. Most scientists believe (tacitly or explicitly, but at least pragmatically) that the distribution of distributions they discover on Earth is “near” the SRN.
P4. The scientists of P3 don’t believe Earth occupies a “special” or “privileged” place in the universe.
C2. Most scientists subscribe to the RUTA belief, not the JesseM belief.

Of course, the point isn’t really about popularity, but epistemological assumptions (tacit or explicit) in an empirically-based exploration of ontology.

Now you should be able to easily and accurately infer my answer to your question about getting “all heads” when “flipping a coin” somewhere in an infinite universe.
 
Last edited:
  • #673
DevilsAvocado said:
Q: Why on Earth are these aberrant results ALWAYS measured in the SAME 'unlucky' civilization??
They're not; you could easily have civilizations which are unlucky for some period of time but whose results then return to the mean. But I was specifically defining "aberrant" relative to a civilization's entire run of experiments over their entire history. In an infinite universe, we might consider the set of all civilizations that do 1 billion runs of a particular experiment in their entire history before their species dies off, for example. For any quantum experiment, do you agree there's some nonzero probability that 1 billion runs of the experiment would yield statistics that are off from the true quantum probabilities by more than some significant amount epsilon? And whatever probability that is, do you agree that in an infinite set of civilizations that do 1 billion runs in their entire history before dying off, that will be the fraction of the set that does get statistics off from the true quantum probabilities by more than epsilon?
DevilsAvocado said:
So JesseM, you must explain why this "googolplexian unluckiness" ALWAYS hits the same poor civilization EVERY TIME, and is not evenly spread out over all MWI civilizations, including ours...?:bugeye:?:bugeye:?
See above. And remember I wasn't talking about MWI civilizations, just about an infinite number of civilizations in a single spatially infinite universe...my point was that RUTA's criticism of the MWI would apply equally to this single-universe case.
 
  • #674
RUTA said:
First, I assume by “flipping a coin” you mean a phenomenon with an unequivocally 50-50 outcome. According to Newton’s laws, the literal flipping of a coin will produce a deterministic outcome, so the 50-50 outcome is not ontological, but epistemological.
The 50/50 probability on an individual trial is not ontological in a deterministic universe, but even in a deterministic universe, for a large set of trials where we flip a coin N times, we should expect that all possible sequences of results occur with equal frequency (for example, if we do 8 million trials where we flip the coin three times and record the result, we'd expect HHH, HHT, HTH, HTT, THH, THT, TTH, and TTT to each occur on about 1 million of the trials). This can be justified using arguments analogous to those in classical statistical mechanics, where we assume all the possible "microstates" associated with a given "macrostate" would occur with equal frequency in the limit of a very large number of trials with the same macrostate.
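If it helps, here is a quick simulation of exactly that example (a sketch with a fixed seed so it's reproducible):

[code]
import random
from collections import Counter

random.seed(0)  # fixed seed so the run is reproducible

# 8 million trials of 3 fair flips each; all 8 sequences should occur
# roughly 1 million times apiece (this takes a little while to run)
counts = Counter(
    "".join(random.choice("HT") for _ in range(3))
    for _ in range(8_000_000)
)
print(counts)  # each of HHH, HHT, ..., TTT lands near 1,000,000
[/code]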

Do you disagree that for some phenomena with a 50/50 outcome, regardless of whether the uncertainty is epistemological or ontological, we would expect that if a near-infinite number of civilizations were doing a sequence of N tests of that phenomenon, all specific sequences of N results would occur with the same frequency relative to this near-infinite set? For example, if each civilization is doing 20 flips of a fair coin, we should expect that about 2^-20 of these civilizations get the sequence HTHHTTTHTHHHHTTHTTHT, while about 2^-20 of these civilizations get the sequence HHHHHHHHHHHHHHHHHHHH? Each specific sequence occurs with equal frequency, but there are far more possible sequences with close to 10 heads and 10 tails than there are possible sequences with more asymmetrical ratios of heads to tails, and this explains why the average civilization is a lot more likely to see something close to a 50/50 ratio.
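The counting behind the 2^-20 figure is easy to verify directly; a quick sketch:

[code]
from math import comb

print(2**-20)                # ~9.5e-7: probability of any one specific sequence
print(comb(20, 10) / 2**20)  # ~0.176: probability of exactly 10 heads in 20 flips
print(comb(20, 20) / 2**20)  # ~9.5e-7: probability of 20 straight heads (one sequence)
[/code]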

It would really help if you would give me a specific answer to whether you agree that the above is statistically correct. If you don't think it's correct, do you think it would still be incorrect if we reduced N from 20 to 3, and replaced multiple civilizations with a single experimenter doing a large run of sequences of 3 tests? If a single experimenter does 8000 sequences of 3 tests, do you disagree with the prediction that about 1000 sequences will give result HHH, about 1000 will give result HHT, and so on for all 8 possible combinations?
RUTA said:
I assume we both agree that there are statistical regularities in Nature. The question is, how are they instantiated? The answer to this question tells us whether or not such regularities can be discovered scientifically. I will argue that, according to the JesseM belief (what you call “pure statistics”)
But you still aren't telling me what specific statement about "pure statistics" you think is incorrect, you're just disagreeing with my argument as a whole. I laid out my argument in a step-by-step fashion so we could pinpoint where precisely you think the argument goes off the rails, rather than you just telling me you disagree with the conclusion. Can you pinpoint what specific sentences in the paragraphs above (starting with 'Do you disagree' and 'It would really help') you believe to be incorrect?
RUTA said:
it is impossible to know whether or not you have discovered any such regularity.
It is impossible to "know" with perfect 100% certainty that a given equation accurately describes nature. But science isn't about perfect 100% certainty in anything! It's just about accumulating stronger and stronger evidence for some theories, and I'd say that if we are comparing some hypothesis to a null hypothesis, and we find that the null hypothesis would require us to believe that a statistical fluctuation with probability 1 in 10^100 had occurred, that's extremely strong evidence that the null hypothesis is false. Perfect 100% certainty only occurs in pure mathematical proofs.
RUTA said:
Conclusion: Most scientists probably subscribe to the RUTA belief (either tacitly or explicitly, but at least pragmatically).
I disagree, most scientists would probably agree that we can never have complete 100% certainty in any theory, only accumulate strong evidence for some theories and evidence against others. And most scientists would agree that if a given set of results would only have a probability of 1 in 10^100 according to some null hypothesis, that's very strong evidence against the null hypothesis.
RUTA said:
Consider a series of experiments designed to find a statistical regularity of Nature (SRN). Each experiment conducts many trials, each with a distribution of outcomes. Many experiments produce many distributions, so that we have a distribution of distributions at any given location in the universe (assumed infinite).

According to the JesseM belief, all conceivable distributions of distributions are instantiated in the universe
Well, only if it is in fact true that the universe is infinite in size with an infinite number of civilizations running the same type of experiment. It's possible the universe is actually finite in size. The standard frequentist view of probability is that probabilities represent the statistics that would be seen in an infinite collection of trials of the same experiment, regardless of whether such an infinite collection is actually performed in the real physical universe.
RUTA said:
and only collectively do they yield the SRN being investigated.

According to the RUTA belief, each distribution of distributions yields the SRN being investigated.
But a "distribution of distributions" is just a larger distribution. Do you think that the laws of statistics would work differently in these two case?

1) A single long-lived civilization does a large number of trials where each trial consists of N tests (like a coin flip), each trial giving a distribution.
2) A large number of short-lived civilizations do m trials where each trial consists of n tests, each trial giving a distribution, after which these civilizations collapse (due to nuclear war or global warming or whatever). As it so happens, m*n=N, so for each of these short-lived civilizations, their "distribution of distributions" consists of a total of N tests.

Your argument would seem to imply that in case 1), since a given series of N tests is just a single distribution from many collected by that civilization, you accept that a given series might show aberrant results; but somehow in case 2), since the "distribution of distributions" for each of the short-lived civilizations consists of N tests, not one of these civilizations will get aberrant statistics on those N tests (which constitute all tests of a given experiment in their entire history, perhaps lasting hundreds of years before they finally die off). This would be something of a statistical miracle!

If you don't think your argument would actually imply this statistically miraculous conclusion, please clarify.
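
(A quick way to see why I say the statistics can't tell the two cases apart: any fixed 50/50 random process has no way of "knowing" whether a given block of N tests belongs to a long-lived civilization or exhausts a short-lived one. A minimal sketch in Python; the 50/50 probability comes from our discussion, the specific N and group count are arbitrary.)

import random
import statistics

random.seed(1)
N = 1000          # tests per block ("distribution of distributions")
num_groups = 500  # blocks: many trials of one civilization, or many civilizations

def heads_counts():
    # One block = N fair-coin tests; return the heads count for each block.
    return [sum(random.random() < 0.5 for _ in range(N)) for _ in range(num_groups)]

case1 = heads_counts()  # case 1: one long-lived civilization, many blocks
case2 = heads_counts()  # case 2: many short-lived civilizations, one block each

# Same generating process, so same mean near N/2 and spread near sqrt(N)/2.
for name, data in [("case 1", case1), ("case 2", case2)]:
    print(name, "mean:", statistics.mean(data), "stdev:", round(statistics.stdev(data), 1))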
RUTA said:
P1. We don’t know the SRN under investigation, that’s why we’re doing the experiment.
P2. If JesseM is right, there are distributions of distributions nowhere “near” the SRN. [Define this proximity per a number of “standard deviations” obtained over the distribution of distributions itself. Pick any number you like, since, according to JesseM, all conceivable distributions of distributions are realized.]
C1. Any particular location in the universe doesn’t (and can’t) know whether or not their distribution of distributions is “near” the SRN.
Again, they can't "know" with 100% certainty, but they can be very very confident. If some aberrant "distribution of distributions" would only occur in 1 out of 10^100 civilizations, it's reasonable for any given civilization to conclude there's only a 1 in 10^100 chance that their civilization is one of the ones that gets the aberrant statistics.
RUTA said:
P3. Most scientists believe (tacitly or explicitly, but at least pragmatically) that the distribution of distributions they discover on Earth is “near” the SRN.
P4. The scientists of P3 don’t believe Earth occupies a “special” or “privileged” place in the universe.
Yes, and according to my view of statistics, both beliefs are perfectly reasonable. You seem to think that somehow my view implies such beliefs aren't reasonable, but you've never given a clear explanation as to why that should be the case.
RUTA said:
C2. Most scientists subscribe to the RUTA belief, not the JesseM belief.
I don't believe that. Most scientists would believe the same laws of statistics apply to collections of trials with N tests in cases 1) and 2) above, despite the fact that in 2) each trial represents a "distribution of distributions" for an entire civilization while in 1) a single civilization is doing many such trials with N tests.
RUTA said:
Now you should be able to easily and accurately infer my answer to your question about getting “all heads” when “flipping a coin” somewhere in an infinite universe.
No, I actually am not sure, so please state it outright. Do you really believe that different statistics would apply in a collection of trials with N tests each in case 1) and 2) above, even though the only difference is that in case 1) we are considering a large number of trials done by a single civilization, and in case 2) we are considering a large number of civilizations which each do N tests before dying off?
 
  • #675
akhmeteli said:
1. For example, according to you, Santos "ended up" "convincing a few good people that "all loopholes should be closed simultaneously"", you call it a "questionable conclusion", I see that as a genuine contribution to our body of knowledge.

2. With all due respect, I believe you post speculative ideas in this forum with impunity when you question the mainstream fact (not opinion) that there have been no experimental demonstrations of violations of the genuine Bell inequalities.

1. That's what you call a contribution? I guess I have a different assessment of that. Better Bell tests will always be on the agenda and I would say Zeilinger's agreement on that represents no change in his overall direction.

2. I consider your comment in 1. above to be acknowledgment of the obvious, which is that it is generally agreed that Bell Inequality violations have been found in every single relevant test performed to date. "Gen-u-wine" ones at that! So you can try and misrepresent the mainstream all you want, but you are 180 degrees off.

Why don't you call it for what it is: you are part of a very small minority regarding Bell. Where's the disrespect in that? If you are confident, just call yourself a rebel and continue your research.
 
  • #676
billschnieder said:
Your argument does not make any sense to me, so I am hoping you could clarify your understanding of the meaning of "probability". If a source has a 0.1-0.9 up-down probability, what does that mean in your understanding according to MWI? Does it mean 10% of the worlds will obtain 100% up and 90% of the worlds will obtain 100% down, or does it mean in every world, there will be 10% up and 90% down? It is not clear from your statements what you mean and what that has got to do with "correct statistics".
To be clear, the actual MWI doesn't give a straightforward explanation of probabilities in terms of a frequentist notion of a fraction of worlds where something occurs; instead MWI advocates have to use more subtle arguments involving things like decision theory. When I talked about fractions of worlds or fractions of copies that see some result, I was talking about my "toy model" from post #11 of this thread, which was showing how in principle it would be possible to explain Bell inequality violations using a local model where each measurement splits the experimenter into multiple copies. Perhaps someday someone will develop a variant of the MWI that explains probabilities in terms of fractions of copies, but it doesn't exist yet.

Anyway, in terms of a model along the lines of my toy model, if there is a ninety percent chance of getting result N and a ten percent chance of getting result T, that would mean that if an experimenter did a trial involving three tests in a row, after it was done there'd be an ensemble of copies of the experimenter, with (0.9)(0.9)(0.9) = 0.729 of the copies having recorded result "NNN", (0.9)(0.9)(0.1) = 0.081 of the copies having recorded result "NNT", (0.1)(0.1)(0.9) = 0.009 having recorded result "TTN", and so on for all eight possible combinations of recorded results.
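
(If it helps, the full ensemble of copies is easy to enumerate; a small sketch in Python, with the 0.9/0.1 probabilities taken from the paragraph above.)

from itertools import product

p = {"N": 0.9, "T": 0.1}  # per-test probabilities from the example above

total = 0.0
for combo in product("NT", repeat=3):
    fraction = 1.0
    for outcome in combo:
        fraction *= p[outcome]  # fraction of copies recording this sequence
    print("".join(combo), fraction)
    total += fraction

print("total:", total)  # the eight fractions sum to 1, as they must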
billschnieder said:
If I calculate the probability that the sun will explode tomorrow to be 0.00001, what does that mean in MWI? Is your understanding that I am calculating the probability of the sun in "my" world exploding, or of all the suns in the multiverse exploding, or what exactly? Or do you think such a probability result does not make sense in science?
Assuming your calculation was correct according to a full QM treatment of the problem, and it represented the probability that our Sun would explode tomorrow given its history up until today, then that would mean tomorrow, in the collection of copies of our solar system that had the same history up until today, the Sun would have exploded in 0.00001 of these copies.
 
  • #677
JesseM said:
The 50/50 probability on an individual trial is not ontological in a deterministic universe, but even in a deterministic universe, for a large set of trials where we flip a coin N times, we should expect that all possible sequences of results occur with equal frequency

Are we talking about assumed statistical or definite outcomes? Are you ascribing the 50-50 outcomes to ontology or epistemology? Of course, we don't know but we have to decide whether or not to cast our hypothetical law in terms of probability (like QM) or certainty (like Newtonian physics). I'm talking about phenomena where we decided that probability is the way to go. The Newtonian analysis of coin flips would not be probabilistic except as regards a lack of knowledge about initial and boundary conditions (chaos theory).

JesseM said:
Do you disagree that for some phenomena with a 50/50 outcome

Where this means the TRUE outcome, not what any particular civilization finds, but the REAL underlying principle of Nature. [Which, we tacitly, explicitly or pragmatically assume exists and can be discovered empirically when we do science.]


JesseM said:
regardless of whether the uncertainty is epistemological or ontological, we would expect that if a near-infinite number of civilizations were doing a sequence of N tests of that phenomena, all specific sequences of N results would occur with the same frequency relative to this near-infinite set?

This is precisely where we disagree. I say ALL civilizations will empirically deduce the 50-50 law. You say some civilization (in the infinite universe case) will find 90-10 (and other non 50-50 results) and conclude 90-10 (and other non 50-50 results) is the REAL underlying principle of Nature.

It's that simple, JesseM. That's where we disagree. For some reason you think it's inane to believe that the REAL underlying statistical law of Nature is discovered by ALL civilizations in an infinite universe. I have to admit, you could be right. But, as a scientist who doesn't believe he lives in a "special" or "privileged" civilization and does believe it's possible to do science, i.e., discover empirically the REAL underlying statistical law of Nature, I HAVE to believe you're wrong and that I'm right -- tacitly, explicitly, or at least pragmatically. If I REALLY believed in your interpretation, I would HAVE to admit that doing science is impossible. So, why would I subscribe to your belief when there's no more argument for it than mine and in mine, I get to do science with a clear conscience?
 
  • #678
JesseM said:
The 50/50 probability on an individual trial is not ontological in a deterministic universe, but even in a deterministic universe, for a large set of trials where we flip a coin N times, we should expect that all possible sequences of results occur with equal frequency
RUTA said:
Are we talking about assumed statistical or definite outcomes?
In the limit as the number of trials approaches infinity, the statistics in the actual outcomes should approach the "true" probabilities determined by the laws of nature with probability 1, that's the law of large numbers. So, I'm talking about actual outcomes.
RUTA said:
Are you ascribing the 50-50 outcomes to ontology or epistemology?
I'm saying that, as an ontological fact, the nature of the laws of physics and the experiment are such that if the experiment was repeated under the same conditions many times, in the limit as the number of trials went to infinity the ratio of one outcome to another would approach 50/50. But this does not mean that there is any true ontological randomness on individual trials, since it may be that the outcome on each trial is completely determined by the initial "microstate" of the coin, coinflipper, and nearby environment. That's what I meant when I said:
This can be justified using arguments analogous to those in classical statistical mechanics, where we assume all the possible "microstates" associated with a given "macrostate" would occur with equal frequency in the limit of a very large number of trials with the same macrostate.
I assume you're probably familiar with how probabilities are derived from an ensemble of microstates in classical statistical mechanics, where the laws of physics are assumed to be completely deterministic?
JesseM said:
Do you disagree that for some phenomena with a 50/50 outcome
RUTA said:
Where this means the TRUE outcome, not what any particular civilization finds, but the REAL underlying principle of Nature. [Which, we tacitly, explicitly or pragmatically assume exists and can be discovered empirically when we do science.]
Sure, you're free to assume that indeterminism is fundamental and that the most fundamental laws of nature can only tell you there's a 50/50 chance on each trial. Although as I said above, you're also free to assume a deterministic universe where the future evolution is totally determined by the initial microstate, but as the number of trials approaches infinity each microstate compatible with the experiment's initial macrostate would occur with equal frequency. Still, for the sake of this discussion let's go with the first option and say the indeterminism is fundamental.
JesseM said:
regardless of whether the uncertainty is epistemological or ontological, we would expect that if a near-infinite number of civilizations were doing a sequence of N tests of that phenomena, all specific sequences of N results would occur with the same frequency relative to this near-infinite set?
RUTA said:
This is precisely where we disagree. I say ALL civilizations will empirically deduce the 50-50 law.
OK, so to be clear, if the laws of physics do indeed give a 50/50 law, you're saying that if a single series of N tests is done by a very long-lived civilization which has time to do many additional series of N tests, then that individual series is not guaranteed to yield N/2 of result #1 and N/2 of result #2? But if a series of N tests is done by a civilization which only has time to do N tests before it dies out, you think they are guaranteed to find N/2 of result #1 and N/2 of result #2?

If so, is this just an assumption you think each civilization must make for epistemological purposes, or do you think if we could actually travel through the universe and surreptitiously observe many civilizations over the course of their entire histories, we would actually see that this was the case?
RUTA said:
For some reason you think it's innane to believe that the REAL underlying statistical law of Nature is discovered by ALL civilizations in an infinite universe.
Yes, just because I believe the same laws of statistics apply to multiple civilizations as would apply to multiple series of experiments performed by a single long-lived civilization. To suggest otherwise would seem nothing short of supernatural, as if the fundamental laws of physics could anticipate how long each civilization was going to last and would tailor their statistics to that.
RUTA said:
But, as a scientist who doesn't believe he lives in a "special" or "privileged" civiliation
I don't either. I just think that if some anomalous results would only be seen by 1 out of 10^100 civilizations (or some other astronomically small probability), then there is only a 1 out of 10^100 probability that my civilization happens to be one that's getting such an anomalous result. It's the civilizations that get anomalous results over huge numbers of trials that are "special", not the ones that get results very close to the true probabilities.
RUTA said:
and does believe it's possible to do science, i.e., discover empirically the REAL underlying statistical law of Nature
So do I, unless by "discover empirically" you mean "know with absolute 100% certainty" as opposed to "be confident beyond all reasonable doubt". As I said, only in math can you know anything with perfect 100% certainty; we cannot even be 100% sure that the Earth is round, only 99.999999% (or whatever) sure.
RUTA said:
If I REALLY believed in your interpretation, I would HAVE to admit that doing science is impossible.
Why? You just keep saying this but never explain what you mean by "doing science", or why you think I am not "doing science" if I show that there is only a 1 in 10^100 chance that my experimental results differ from the true probabilities by more than some small amount epsilon.
RUTA said:
So, why would I subscribe to your belief when there's no more argument for it than mine
The argument for mine is just that the same laws of statistics apply everywhere: the fundamental laws of physics have no high-level knowledge of what a "civilization" is or how long each civilization will last, so they cannot use any such knowledge to give different results in a series of N tests depending on whether the tests are done in a civilization that only does N tests before dying out or in a civilization that is more long-lived.
 
  • #679
"OK, so to be clear, if the laws of physics do indeed give a 50/50 law, you're saying that if a single series of N tests is done by a very long-lived civilization which has time to do many additional series of N tests, then that individual series is not guaranteed to yield N/2 of result #1 and N/2 of result #2? But if a series of N tests is done by a civilization which only has time to do N tests before it dies out, you think they are guaranteed to find N/2 of result #1 and N/2 of result #2?

If so, is this just an assumption you think each civilization must make for epistemological purposes, or do you think if we could actually travel through the universe and surreptitiously observe many civilizations over the course of their entire histories, we would actually see that this was the case?"

Both civilizations will deduce the 50-50 law (or whatever the REAL law is). That's MY assumption and it is made precisely for the reasons I stated (and will not repeat).

Your confusion arises because you haven't recognized YOUR bias, i.e., you believe a statistical law of Nature will yield all conceivable distributions given enough trials. I don't subscribe to it for the reasons I stated (and will not repeat).

I can't be any clearer, JesseM. If you still don't get it, you probably never will -- no pun intended :-)
 
  • #680
RUTA said:
JesseM said:
"OK, so to be clear, if the laws of physics do indeed give a 50/50 law, you're saying that if a single series of N tests is done by a very long-lived civilization which has time to do many additional series of N tests, then that individual series is not guaranteed to yield N/2 of result #1 and N/2 of result #2? But if a series of N tests is done by a civilization which only has time to do N tests before it dies out, you think they are guaranteed to find N/2 of result #1 and N/2 of result #2?

If so, is this just an assumption you think each civilization must make for epistemological purposes, or do you think if we could actually travel through the universe and surreptitiously observe many civilizations over the course of their entire histories, we would actually see that this was the case?"

Both civilizations will deduce the 50-50 law (or whatever the REAL law is). That's MY assumption and it is made precisely for the reasons I stated (and will not repeat).
OK, but just saying that both civilizations will deduce it does not tell me if you actually believe that if a civilization does a test exactly N times before collapsing, it will get exactly N/2 with each result. After all, part of the way we deduce laws of nature is just by looking for elegant and simple equations that agree closely with all experiments--even if someone did a meta-analysis of all experiments ever done measuring the spin of fermions and found that it was actually 50.000001% of experiments that gave spin-up, that wouldn't cause them to change the equations of quantum physics, which would become more ungainly and inelegant if you tried to make them agree with this result.

So, can you please tell me if you think that if a civilization does the experiment N times in its history before civilization ends, they will get exactly N/2 of each outcome? Yes or no?

Incidentally, it occurs to me that the laws of quantum physics don't just give probability distributions for single measurements, they also give probability distributions for the possible statistics seen on a series of N measurements, even if N is very large. Would you disagree that for an experiment with spin that QM predicts has a 1/2 chance of yielding either result, it's also a prediction of QM that there is a probability of 1/2^N that a series of N spin measurements on different particles (whose spin is unknown until measurement) will yield the result spin-up every time? So in other words, if you say the probability is zero that this will happen whenever N is the total number of experiments performed by a given civilization, you are saying that the equations of QM actually give incorrect predictions in this case?
RUTA said:
I can't be any clearer, JesseM. If you still don't get it, you probably never will -- no pun intended :-)
I get that you believe it would be impossible to "do science" if there was the slightest chance (even 1 in 10^100 or whatever) that the statistics collected over our entire history could be badly off from the true probabilities due to random statistical fluctuation, but I don't get why you believe this, because you haven't defined what you mean by "do science" (for example, you haven't told me whether you think that science requires us to be able to be 100% certain a theory is empirically correct, or whether you agree that strong evidence which convinces us the theory is correct beyond all reasonable doubt is the best that empirical science can hope to achieve...and if the latter, I don't see why a 1 in 10^100 chance that the null hypothesis could give the observed results doesn't qualify as strong evidence which convinces beyond all reasonable doubt that the null hypothesis is empirically false).
 
  • #681
JesseM said:
OK, but just saying that both civilizations will deduce it does not tell me if you actually believe that if a civilization does a test exactly N times before collapsing, it will get exactly N/2 with each result. After all, part of the way we deduce laws of nature is just by looking for elegant and simple equations that agree closely with all experiments--even if someone did a meta-analysis of all experiments ever done measuring the spin of fermions and found that it was actually 50.000001% of experiments that gave spin-up, that wouldn't cause them to change the equations of quantum physics, which would become more ungainly and inelegant if you tried to make them agree with this result.

So, can you please tell me if you think that if a civilization does the experiment N times in its history before civilization ends, they will get exactly N/2 of each outcome? Yes or no?

In the experiment I referenced (Dehlinger & Mitchell), the result was quoted as S = 2.307 +/- 0.035. Have you done experimental physics? Do you understand where the +/- 0.035 comes from? If so, then it should be abundantly clear to you what I mean by "every civilization is able to empirically determine the REAL law." If not, do some experimental physics then come back and we'll talk.

JesseM said:
Incidentally, it occurs to me that the laws of quantum physics don't just give probability distributions for single measurements, they also give probability distributions for the possible statistics seen on a series of N measurements, even if N is very large. Would you disagree that for an experiment with spin that QM predicts has a 1/2 chance of yielding either result, it's also a prediction of QM that there is a probability of 1/2^N that a series of N spin measurements on different particles (whose spin is unknown until measurement) will yield the result spin-up every time? So in other words, if you say the probability is zero that this will happen whenever N is the total number of experiments performed by a given civilization, you are saying that the equations of QM actually give incorrect predictions in this case?

Again, you are conflating "conceivable" with "realizable." Just because you imagine it to be so, and indeed may even USE these numbers to do a computation, doesn't entail the results will be realized by any particular civilization. There's nothing in the analysis that says all conceivable distributions will be realized given an infinite number of trials. You are adding that to the formalism as an assumption.

JesseM said:
I get that you believe it would be impossible to "do science" if there was the slightest chance (even 1 in 10^100 or whatever) that the statistics collected over our entire history could be badly off from the true probabilities due to random statistical fluctuation, but I don't get why you believe this, because you haven't defined what you mean by "do science" (for example, you haven't told me whether you think that science requires us to be able to be 100% certain a theory is empirically correct, or whether you agree that strong evidence which convinces us the theory is correct beyond all reasonable doubt is the best that empirical science can hope to achieve...and if the latter, I don't see why a 1 in 10^100 chance that the null hypothesis could give the observed results doesn't qualify as strong evidence which convinces beyond all reasonable doubt that the null hypothesis is empirically false).

I did define "doing science." Here is the bottom line: I'm a theoretical physicist, but I have done and taught experimental physics to include error analysis. Perhaps you have also engaged in these activities, maybe you're even an experimental physicist, but the bottom line is that we possesses different underlying assumptions as to what the nature of reality allows us to conclude about our theories and experiments. My underlying assumption: All civilizations will observe the same distribution (in accord with the REAL law) when conducting the same experiments, even if there are an infinite number of such civilizations. Your underlying assumption: The distributions obtained by all civilizations will mirror all those that are conceivable per the REAL law.

You can keep asking me the same questions (just reworded), and I can keep giving you the same answers (just reworded), but at this point, if you don't understand, you'll just have to live with it. I'm done.
 
  • #682
RUTA said:
In the experiment I referenced (Dehlinger & Mitchell), the result was quoted as S = 2.307 +/- 0.035. Have you done experimental physics? Do you understand where the +/- 0.035 comes from?
Without seeing the details of their error analysis I don't know exactly, but from my understanding of significance testing, the usual procedure would be to take the experimental mean E (like E=2.307), then pick a confidence interval--say, two standard deviations or 95%--and then pick a statistical distribution that makes sense for the experiment, like the normal distribution, and find two different normal distributions with different means: one with a mean M1 lower than E such that exactly 95% of the area under the curve is between (2M1 - E) and E (note that the midpoint between 2M1-E and E is M1), and another with a mean M2 greater than E such that exactly 95% of the area under the curve is between E and (2M2 - E) (the midpoint between E and 2M2-E is M2). So, in effect you're considering an infinite set of null hypotheses with different means, and then looking for the ones with the lowest and highest possible means M1 and M2 such that the experimentally-observed mean E would not lie outside the middle 95% of your null hypothesis (causing you to reject it). If you are considering a set of normal distributions with different means as your null hypotheses, then this is actually equivalent to just finding a single normal distribution centered on E and then finding an M1 and M2 such that 95% of the area under the curve lies between M1 and M2, but in general if you aren't necessarily assuming normal distributions as your null hypotheses, you have to consider two different distributions which are not centered on E as described above. See the discussion on p. 4-6 of this book on confidence intervals along with the "two-distribution representation" on page 9.

Anyway, the point is that if the authors used this type of procedure, then it doesn't mean they are 100% confident that the true mean lies in the range 2.307 +/- 0.035. Rather, it just means that if the true distribution is of the type they assumed in calculating the confidence interval (most likely a normal distribution), and if the true mean M of the distribution lies in the range 2.307 +/- 0.035, then 95% of all samples from this distribution will lie within a region centered on M which includes the experimentally observed value of 2.307 (which for a normal distribution is equivalent to saying that if a large number of samples are taken from a distribution with mean M, and each sample is used to construct a 95% confidence interval, then 95% of the confidence intervals will include the value M--see the bottom of p. 1 in http://philosophy.ucsd.edu/Courses/winter06/WhatisConfIntervalSober.pdf ). But if the true mean M lies outside the range 2.307 +/- 0.035, then it'd still be true that 5% of all samples from this distribution will be in the "tails" of the distribution, and the tails would include the experimentally observed value of 2.307. So, you have no basis for being totally certain the true mean M lies in the range 2.307 +/- 0.035, since even if it doesn't, there is still some nonzero chance of getting the experimental result 2.307.
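
(To make the recipe concrete, here is a minimal sketch of the standard normal-approximation confidence interval for a mean. To be clear, this is not necessarily the error analysis Dehlinger & Mitchell actually performed; the sample values below are made up purely for illustration.)

import math
import statistics

samples = [2.31, 2.27, 2.35, 2.29, 2.33, 2.28, 2.32]  # hypothetical S values
n = len(samples)
mean = statistics.mean(samples)
sem = statistics.stdev(samples) / math.sqrt(n)  # standard error of the mean

z = 1.96  # two-sided 95% point of the standard normal distribution
print(f"S = {mean:.3f} +/- {z * sem:.3f}")

# Note what this does NOT say: it does not say the true S certainly lies
# in the interval. It says that, under the assumed distribution, 95% of
# intervals constructed this way from repeated samples would contain the
# true mean.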
RUTA said:
If so, then it should be abundantly clear to you what I mean by " every civilization is able to empirically determine the REAL law."
No, it isn't. If you are engaging in good-faith intellectual discussion rather than just using rhetoric to make a case against my position, I think it is reasonable to ask that you actually give me a straight answer to a simple yes-or-no question like the one I asked:
So, can you please tell me if you think that if a civilization does the experiment N times in its history before civilization ends, they will get exactly N/2 of each outcome? Yes or no?
RUTA said:
Again, you are conflating "conceivable" with "realizable." Just because you imagine it to be so, and indeed may even USE these numbers to do a computation, doesn't entail the results will be realized by any particular civilization.
Would you agree with the claim that the law of large numbers says that as the number of trials approaches infinity, the observed statistics should approach the true probabilities given by the fundamental laws of physics with probability 1? (so if theoretical QM says the probability of getting N spin-ups in a row is 1/2^N, then in the limit as the number of civilizations that perform N spin measurements approaches infinity, the fraction that get all spin-ups should approach 1/2^N) Or perhaps you would agree that this should be true according to the law of large numbers, but you don't believe the law of large numbers would actually hold even in an infinite universe with an infinite number of trials of any experiment?
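
(To make that frequentist reading concrete, here is a minimal simulation sketch; N is kept small on purpose so that all-spin-up runs actually show up in a feasible number of simulated "civilizations", and all the specific numbers are illustrative.)

import random

random.seed(2)
N = 10                  # measurements per civilization (small on purpose)
civilizations = 200000  # number of simulated civilizations

all_up = sum(
    all(random.random() < 0.5 for _ in range(N))  # True if every result is spin-up
    for _ in range(civilizations)
)

# The law of large numbers says the observed fraction should approach
# 1/2^N (about 0.000977 for N = 10) as the number of civilizations grows.
print("observed fraction:", all_up / civilizations)
print("predicted 1/2^N:  ", 0.5 ** N)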
RUTA said:
There's nothing in the analysis that says all conceivable distributions will be realized given an infinite number of trials. You are adding that to the formalism as an assumption.
But it's just the assumption that the law of large numbers is correct, which is a founding assumption in statistics. Without it I don't see how you could justify the claim that a large set of trials is more likely to give statistics close to the "true" values than a small one! Perhaps you can think of some way of justifying it in terms of your own statistical philosophy, but if you are arguing that most other physicists would agree that the law of large numbers does not apply in any case where the number of trials approaches infinity (including an infinite universe where each civilization does a finite number of trials but the number of civilizations is infinite), I think that's extremely unlikely.
RUTA said:
I did define "doing science."
In what post? Did you define it in such a way that would tell me whether you think we need to be able to have perfect 100% certainty in some result or we aren't "doing science", or whether you agree that we are still "doing science" if we reject hypotheses based on the fact that they are astronomically unlikely to produce the experimentally-observed results?
RUTA said:
My underlying assumption: All civilizations will observe the same distribution (in accord with the REAL law) when conducting the same experiments, even if there are an infinite number of such civilizations.
But that's not in accord with the "REAL law" as I understand it, because a statistical law like QM doesn't just predict the average expected value, it also predicts a specific probability distribution on any sequence of results.
RUTA said:
You can keep asking me the same questions (just reworded), and I can keep giving you the same answers (just reworded), but at this point, if you don't understand, you'll just have to live with it. I'm done.
You may think that your answer to the questions I ask should be obvious from your previous answers, but to me they aren't, because I can think of various alternatives that seem consistent with your previous answers. For example, to the question of whether a 50/50 probability implies that a civilization that does exactly N trials in its history will get exactly N/2 of each result, it would be consistent with your previous answers if you said "yes, I think the number must be precisely N/2, not one more or less" but it would also seem consistent with your previous answers if you said "no, I just think that if they use their data to construct a confidence interval, the true value is guaranteed to lie in that confidence interval." Likewise, to the question about the law of large numbers, you might say "I agree that the law of large numbers applies to the probabilities in QM, but I think the theory of QM only deals with probabilities of individual measurements, it doesn't even define a probability distribution on the 2^N conceivable results for a sequence of N measurements" or "no, I reject the law of large numbers altogether". To the question of whether you think we'd see different statistics on a bunch of sequences of N trials depending on whether each sequence represented the entire history of a bunch of short-lived civilizations or they were each just a fraction of the trials done by a long-lived civilization, you might say "yes, I think the statistics would be different" or "no, I think that for any sufficiently large N, you're guaranteed to see statistics equal to the true probabilities, or at least statistics where if you use them to build a confidence interval the true probabilities are guaranteed to be in that confidence interval." And to the question of whether "doing science" requires perfect certainty you might say "yes, unless we can achieve perfect certainty some theory is correct there can be no science" or "no, showing that it's astronomically unlikely a given theory would produce the observed results is fine, I just don't believe that even an astronomically small fraction of civilizations will get data that forces them to conclude that about a theory that's actually correct". I could go on, but the point is that my uncertainty is genuine, and I don't think it's due to poor reading comprehension on my part; I think others reading your posts and looking at the alternative possible answers I suggest would also be unsure of which answer you'd give, based on your previous statements.
 
  • #683
akhmeteli said:
Local realism ruled out?

Einstein's dream is definitely a degenerative scientific program (at least it seems so at this moment in time). But locality (via the idea that nature conspires somehow - strong determinism at the Planck level is one radical version - to produce the results seen in Aspect et al. types of experiments) is still perennial. Counterfactual definiteness is definitely a weak link in the chain of reasoning behind the rejection of locality; accepting it in the premises is indeed 'good sense' at this moment in time, but this in no way makes its rejection a dead end. So even if this kind of hypothesis is situated rather low now on a not (strongly) prescriptive list of viable scientific programs (one can argue that it is stagnant), it is a mistake to jump to the much stronger conclusion that locality is dead. The future may still be full of surprises.
 
  • #684
DrChinese said:
You apparently cannot, as you put forth your opinions as fact. Further, you apparently cannot tell the difference between ad hoc speculation and evidence based opinions. To reasonable people, there is a difference.

Whatever you say, it is a mainstream fact that there has been no experimental evidence of violations of the genuine Bell inequalities. I supported this statement with quotes. Your denial of this fact does not seem reasonable. It just does not seem reasonable. Zeilinger does not know about such evidence, Shimony does not know about such evidence, Genovese does not know about such evidence, however you do know about it. Why don't you enlighten them? I am sure they will be grateful.

DrChinese said:
There is a huge difference in your speculation on loopholes (notice how you cannot model the behavior of these despite your unsupported claims) and RUTA's opinions (which he can model nicely using both standard and original science).

Look, neither you nor RUTA think local realism has a snowball's chance in hell. I have no problems with that. But, unlike you, RUTA does not deny two facts (his post 618 in this thread):

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.

So, while our opinions differ wildly, our facts don't. Whereas you deny at least one of them (your post 621 in this thread), although you seem to know about Bell experiments much more than I do. The situation is very simple. In all experiments, either spatial separation is not sufficient, so the Bell inequalities can be violated in local realistic theories as well, or the Bell inequalities are doctored using the fair sampling assumption, so it's not the genuine Bell inequalities that are violated.

I am not even sure if you deny the second statement. On the one hand, your "disagree" may relate to both statements, on the other hand, you say for some reason that "If QM is wrong, so be it."

Anyway, I just cannot understand why you choose to deny a mainstream fact. Are you trying to be "holier than thou"? But we are not discussing religious issues, for god's sake.

And, by the way, what unsupported claims exactly?
 
  • #685
akhmeteli said:
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.
Since you distinguish the projection postulate from the Born rule, would you acknowledge that Bell's proof of the incompatibility between QM and local realism only depends on the idea that empirical observations will match those given by applying the Born rule to the wavefunction at the moment the observations are made? Also, do you agree that other "interpretations" of QM that don't require any special postulate about measurement, like Bohmian mechanics, also predict Bell inequality violations in the type of experiment examined by Bell? Since Bell's proof is just based on deducing the consequences of local realism for these types of experiments, and only at the end does he need to make reference to QM to compare its predictions to those of local realism, you could easily modify the proof to show "local realism's predictions about these experiments are inconsistent with any model whose predictions about empirical results match those of Bohmian mechanics".
 
  • #686
JesseM said:
Since you distinguish the projection postulate from the Born rule, would you acknowledge that Bell's proof of the incompatibility between QM and local realism only depends on the idea that empirical observations will match those given by applying the Born rule to the wavefunction at the moment the observations are made? Also, do you agree that other "interpretations" of QM that don't require any special postulate about measurement, like Bohmian mechanics, also predict Bell inequality violations in the type of experiment examined by Bell? Since Bell's proof is just based on deducing the consequences of local realism for these types of experiments, and only at the end does he need to make reference to QM to compare its predictions to those of local realism, you could easily modify the proof to show "local realism's predictions about these experiments are inconsistent with any model whose predictions about empirical results match those of Bohmian mechanics".

JesseM,

Sorry, I owe you replies to your previous posts, as, on the one hand, I am busy at work right now, on the other hand, some of your posts take quite some time to reply.

But let me try to reply to this post so far.

No, I don't agree "that Bell's proof of the incompatibility between QM and local realism only depends on the idea that empirical observations will match those given by applying the Born rule to the wavefunction at the moment the observations are made". As I said, to prove that the inequalities can be violated in quantum theory, you need to calculate correlations. In the process, you use the projection postulate, assuming that as soon as you measured the spin projection for one particle to be +1, the spin projection for the other particle immediately becomes definite and equal to -1. This postulate introduces nonlocality directly and shamelessly:-) I don't know a proof that would use the Born rule only. And in the experiment, you actually conduct two measurements for two particles.

And I don't agree that, say, "Bohmian mechanics also predicts Bell inequality violations in the type of experiment examined by Bell", not without using something like the projection postulate. I wrote about that in my post 660 in this thread (at the end). I said there "I don't know", and I don't, but I won't agree with that until I see a reference to a proof. As I said, I very much doubt that this can be proven in Bohmian mechanics without something like the projection postulate. If it could be done, it seems there would be no problem to translate this proof into a proof for standard quantum theory. As I said, according to Demystifier, for example, the projection postulate is an approximation in Bohmian mechanics. And as Bohmian mechanics embraces unitary evolution, and as unitary evolution contradicts the projection postulate, I am sure the latter cannot be anything but an approximation in Bohmian mechanics. Otherwise Bohmian mechanics would inherit the contradictions of the standard quantum theory.
 
  • #687
akhmeteli said:
JesseM,

Sorry, I owe you replies to your previous posts, as, on the one hand, I am busy at work right now, on the other hand, some of your posts take quite some time to reply.
No problem.
akhmeteli said:
No, I don't agree "that Bell's proof of the incompatibility between QM and local realism only depends on the idea that empirical observations will match those given by applying the Born rule to the wavefunction at the moment the observations are made". As I said, to prove that the inequalities can be violated in quantum theory, you need to calculate correlations. In the process, you use the projection postulate, assuming that as soon as you measured the spin projection for one particle to be +1, the spin projection for the other particle immediately becomes definite and equal to -1.
Why is that necessary? The wavefunction for an entangled system assigns an amplitude to joint states like |01> and |00>, no? So can't you just apply the Born rule once to find the probability of a given joint state?
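
(To illustrate what I mean by applying the Born rule directly to joint outcomes, here is a minimal numerical sketch for the spin singlet state; the probabilities below are the textbook Born-rule values for joint spin measurements along in-plane directions, and the angles are the standard CHSH choices. Nothing here is specific to our discussion, and no projection postulate appears anywhere.)

import math

def joint_probs(a, b):
    # Born-rule probabilities for joint outcomes on a singlet pair, measured
    # along directions at angles a and b (radians, in a common plane):
    # P(+,+) = P(-,-) = (1/2) sin^2((a-b)/2); P(+,-) = P(-,+) = (1/2) cos^2((a-b)/2)
    d = a - b
    return 0.5 * math.sin(d / 2) ** 2, 0.5 * math.cos(d / 2) ** 2

def correlation(a, b):
    p_same, p_diff = joint_probs(a, b)
    # E = P(++) + P(--) - P(+-) - P(-+) = -cos(a-b)
    return 2 * p_same - 2 * p_diff

# Standard CHSH angle choices.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = correlation(a1, b1) - correlation(a1, b2) + correlation(a2, b1) + correlation(a2, b2)
print("|S| =", abs(S))  # 2*sqrt(2) ~ 2.83 > 2: the CHSH bound is violated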
akhmeteli said:
And I don't agree that, say, "Bohmian mechanics also predicts Bell inequality violations in the type of experiment examined by Bell", not without using something like the projection postulate.
Bohmian mechanics doesn't require the projection postulate. It just says that the particles have a well-defined position at all times, and measurement outcomes all depend on that position. Have you read the full Stanford Encyclopedia article on Bohmian mechanics? From section 4:
In the Bohmian mechanical version of nonrelativistic quantum theory, quantum mechanics is fundamentally about the behavior of particles; the particles are described by their positions, and Bohmian mechanics prescribes how these change with time. In this sense, for Bohmian mechanics the particles, described by their positions, are primary, or primitive, while the wave function is secondary, or derivative.

...

This demonstrates that all claims to the effect that the predictions of quantum theory are incompatible with the existence of hidden variables, with an underlying deterministic model in which quantum randomness arises from averaging over ignorance, are wrong. For Bohmian mechanics provides us with just such a model: For any quantum experiment we merely take as the relevant Bohmian system the combined system that includes the system upon which the experiment is performed as well as all the measuring instruments and other devices used in performing the experiment (together with all other systems with which these have significant interaction over the course of the experiment) ... The initial configuration is then transformed, via the guiding equation for the big system, into the final configuration at the conclusion of the experiment. It then follows that this final configuration of the big system, including in particular the orientation of instrument pointers, will also be distributed in the quantum mechanical way, so that this deterministic Bohmian model yields the usual quantum predictions for the results of the experiment.
So the idea is that the same dynamical equation guides the behavior of all components of the system from beginning to end (which each have a single well-defined position at all times, no superpositions involved), with no discontinuities along the lines of the projection postulate. And at the end, the state of the system includes the state of all "instrument pointers", so that gives you the Bohmian predictions about empirical results, which always agree with "the usual quantum predictions for the results of the experiment" (which I think would just mean the predictions you get by taking the wavefunction for the whole system, evolving it to the time of the end of the experiment, and using the Born rule on the amplitudes for different joint outcomes).

The article repeats the idea that Bohmian mechanics deals fundamentally with position in section 5:
Bohmian mechanics has been presented here as a first-order theory, in which it is the velocity, the rate of change of position, that is fundamental: it is this quantity, given by the guiding equation, that is specified by the theory, directly and simply
And in section 7:
By contrast, if, like Einstein, we regard the description provided by the wave function as incomplete, the measurement problem vanishes: With a theory or interpretation like Bohmian mechanics, in which the description of the after-measurement situation includes, in addition to the wave function, at least the values of the variables that register the result, there is no measurement problem. In Bohmian mechanics pointers always point.
Sections 7 and 8 also explain why Bohmian predictions end up being the same as predictions made using the standard pragmatic recipe involving "collapse" during measurement, in spite of the fact that Bohmian mechanics says nothing new or different really happens during measurement (the answer seems to involve the Bohmian version of decoherence).

Section 14 explains that the guiding equation of Bohmian dynamics describes the evolution of "configurations", and configurations are just specifications of the positions of every part of the system (which may be 'hidden variables' if we haven't measured the position of any given part at any given moment):
Nor can Bohmian mechanics easily be modified to become Lorentz invariant. Configurations, defined by the simultaneous positions of all particles, play too crucial a role in its formulation, the guiding equation defining an evolution on configuration space.
And section 15 says:
The Bohmian account of the two-slit experiment, in Section 6, and its resolution of the measurement problem (or the paradox of Schrödinger's cat), in Section 7, are simple and straightforward. With regard to the latter, in Bohmian mechanics particles always have definite positions, and hence pointers, which are made of particles, always point.
Anyway, I think you get the idea: Bohmian mechanics doesn't require anything like the projection postulate because it just gives a deterministic equation for the positions of all the particles in a given system, including the particles in measuring-devices ('pointers'). If you haven't read the full article I really recommend doing so, it's very informative.
akhmeteli said:
I wrote about that in my post 660 in this thread (at the end). I said there "I don't know", and I don't, but I won't agree with that until I see a reference to a proof. As I said, I very much doubt that this can be proven in Bohmian mechanics without something like the projection postulate.
As noted above, you seem to be misunderstanding something very basic about Bohmian mechanics: it has no need of the projection postulate because it assumes all particles have unique positions at all times (no spread-out superpositions), including particles in measuring-devices, and the evolution of these positions is given by a deterministic "guiding equation". So, the statistics it predicts for multiple trials of some experiment would just be derived in a straightforward way from the statistics it predicts for pointer states on multiple trials (and as discussed in section 9, the reason it can be used to derive statistical predictions despite having a deterministic guiding equation is basically identical to how you get statistical predictions in classical statistical mechanics--just as there are multiple microstates compatible with a given observed macrostate in statistical mechanics, and we assume each possible microstate is equally probable, similarly in Bohmian mechanics there are multiple hidden-variable configurations compatible with a given observed quantum state, and it's assumed that these are distributed according to the quantum equilibrium measure |psi|^2, which plays the same role as the equiprobable-microstate assumption does in statistical mechanics).

As for whether these Bohmian predictions agree with those made using the usual recipe of wavefunction evolution + Born rule, I already quoted section 4 saying "so that this deterministic Bohmian model yields the usual quantum predictions for the results of the experiment", and the last paragraph of section 13 says:
The nonlocality of Bohmian mechanics has a remarkable feature: it is screened by quantum equilibrium. It is a consequence of the quantum equilibrium hypothesis that the nonlocal effects in Bohmian mechanics don't yield observable consequences that are also controllable — we can't use them to send instantaneous messages. This follows from the fact that, given the quantum equilibrium hypothesis, the observable consequences of Bohmian mechanics are the same as those of orthodox quantum theory, for which instantaneous communication based on quantum nonlocality is impossible (see Eberhard 1978).
akhmeteli said:
If it could be done, it seems there would be no problem to translate this proof into a proof for standard quantum theory.
Proof of what? Are you still talking about my statement "Bohmian mechanics also predicts Bell inequality violations in the type of experiment examined by Bell"? And what would the analogous "proof for standard quantum theory" be--just a proof that standard quantum theory predicts Bell inequality violations? (again, you can show this by just applying the Born rule to joint states, which are assigned amplitudes by the wavefunction)
akhmeteli said:
As I said, according to Demystifier, for example, the projection postulate is an approximation in Bohmian mechanics.
Well, as I said, the meaning of his words is unclear, he might have just meant that Bohmian mechanics reproduces the exact same statistics as you'd get using the projection postulate, but that it does so using a different fundamental equation and without assuming anything special actually happens during measurement. It's also possible Demystifier would distinguish between the procedure of repeatedly applying the projection postulate for multiple measurements vs. assuming unitary evolution until the very end of a series of measurements and then applying the Born rule to find the probabilities for different possible combinations of recorded outcomes for all the previous measurements, and that he would say there are cases where Bohmian mechanics would predict slightly different statistics from the first case but not from the second case.

In any case, the Stanford Encyclopedia article was written by a professional physicist, Sheldon Goldstein, who advocates Bohm's interpretation, and in it he makes some quite unambiguous statements like "given the quantum equilibrium hypothesis, the observable consequences of Bohmian mechanics are the same as those of orthodox quantum theory". If Demystifier would actually disagree with statements like that (and I don't think he would), I would tend to trust Goldstein's expertise over Demystifier's. Also, p. 50 of this book says that in any situation where the standard version of QM makes definite predictions, Bohmian mechanics makes the same predictions (though the author considers the possibility there might be situations where the standard version doesn't make clear predictions, like an observable which can't be represented as a Hermitian operator):
The important question remains (quite crucial particularly from the pragmatic point of view) of whether or not Bohm's model and the standard interpretation are indeed observationally completely equivalent. Of course in typical experiments if the calculation of any measurable quantity is unambiguously formulated, then both these interpretations yield the same predictions when the (common) formalism is applied. In an interview with Home [132] in 1986, when asked whether there were new predictions from his model, Bohm responded: "Not the way it's done." Bell [133] made a similar point but a bit more circumspectly: "It (de Broglie-Bohm version of nonrelativistic quantum mechanics) is experimentally equivalent to the usual version insofar as the latter is unambiguous."
 
  • #688
JesseM said:
No problem.

Why is that necessary? The wavefunction for an entangled system assigns an amplitude to joint states like |01> and |00>, no? So can't you just apply the Born rule once to find the probability of a given joint state?

I'll try to answer your questions one at a time, otherwise I'll never be able to handle them:-)


I don't quite get it. In Bell experiments, you need correlations. You need two measurements on the entangled system (maybe you could design some measurement to measure the correlation directly in one measurement, but practical Bell experiments require two measurements). Therefore, you need to apply the Born rule twice to predict something for these measurements.
 
  • #689
I will skip a large part of the quote
JesseM said:
Sections 7 and 8 also explain why Bohmian predictions end up being the same as predictions made using the standard pragmatic recipe involving "collapse" during measurement, in spite of the fact that Bohmian mechanics says nothing new or different really happens during measurement (the answer seems to involve the Bohmian version of decoherence).


As for whether these Bohmian predictions agree with those made using the usual recipe of wavefunction evolution + Born rule, I already quoted section 4 saying "so that this deterministic Bohmian model yields the usual quantum predictions for the results of the experiment", and the last paragraph of section 13 says:


Well, as I said, the meaning of his words is unclear; he might have just meant that Bohmian mechanics reproduces the exact same statistics as you'd get using the projection postulate, but that it does so using a different fundamental equation and without assuming anything special actually happens during measurement. It's also possible Demystifier would distinguish between the procedure of repeatedly applying the projection postulate for multiple measurements vs. assuming unitary evolution until the very end of a series of measurements and then applying the Born rule to find the probabilities for different possible combinations of recorded outcomes for all the previous measurements, and that he would say there are cases where Bohmian mechanics would predict slightly different statistics from the first case but not from the second case.

In any case, the Stanford Encyclopedia article was written by a professional physicist, Sheldon Goldstein, who advocates Bohm's interpretation, and in it he makes some quite unambiguous statements like "given the quantum equilibrium hypothesis, the observable consequences of Bohmian mechanics are the same as those of orthodox quantum theory". If Demystifier actually disagreed with statements like that (and I don't think he would), I would tend to trust Goldstein's expertise over Demystifier's. Also, p. 50 of this book says that in any situation where the standard version of QM makes definite predictions, Bohmian mechanics makes the same predictions (though the author considers the possibility there might be situations where the standard version doesn't make clear predictions, like an observable which can't be represented as a Hermitian operator):

In the same article Goldstein writes:
"The second formulation of the measurement problem, though basically equivalent to the first one, suggests an important question: Can Bohmian mechanics itself provide a coherent account of how the two dynamical rules might be reconciled? How does Bohmian mechanics justify the use of the "collapsed" wave function in place of the original one? This question was answered in Bohm's first papers on Bohmian mechanics (Bohm 1952, Part I, Section 7, and Part II, Section 2). What would nowadays be called effects of decoherence, produced by interaction with the environment (air molecules, cosmic rays, internal microscopic degrees of freedom, etc.), make it extremely difficult for the component of the after-measurement wave function corresponding to the actual result of the measurement to develop significant overlap — in the configuration space of the very large system that includes all systems with which the original system and apparatus come into interaction — with the other components of the after-measurement wave function. But without such overlap the future evolution of the configuration of the system and apparatus is generated, to a high degree of accuracy, by that component all by itself. The replacement is thus justified as a practical matter. (See also Dürr et al. 1992, Section 5.)"

"To a high degree of accuracy"! So Goldstein says exactly the same as Demystifier (or, if you wish, Demystifier says exactly the same as Goldstein:-) ), namely: collapse is an approximation. The overlap does not disappear!
 
  • #690
akhmeteli said:
I'll try to answer your questions one at a time, otherwise I'll never be able to handle them:-)


I don't quite get it. In Bell experiments, you need correlations. You need two measurements on the entangled system (maybe you could design some measurement to measure the correlation directly in one measurement, but practical Bell experiments require two measurements). Therefore, you need to apply the Born rule twice to predict something for these measurements.
But in terms of the formalism, do you agree that you can apply the Born rule once for the amplitude of a joint state? One could consider this as an abstract representation of a pair of simultaneous measurements made at the same t-coordinate, for example. Even if the measurements were made at different times, one could assume unitary evolution for each measurement so that each measurement just creates entanglement between the particles and the measuring devices, but then apply the Born rule once to find the probability for the records of the previous measurements ('pointer states' in Bohmian lingo).
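To make that concrete, here is a minimal numerical sketch (my own toy illustration with made-up amplitudes, not something from any reference in this thread) of applying the Born rule once to the joint amplitudes of a two-qubit state:

Code:
import numpy as np

# A generic two-qubit state a|00> + b|01> + c|10> + d|11>; the Born rule is
# applied once to the joint amplitudes, giving the probability of each
# combination of outcomes in a single step.
a, b, c, d = 0.1, 0.7, -0.7, 0.1    # made-up amplitudes for illustration
psi = np.array([a, b, c, d], dtype=complex)
psi /= np.linalg.norm(psi)          # normalize the state

for label, amp in zip(["00", "01", "10", "11"], psi):
    print(f"P(|{label}>) = {abs(amp) ** 2:.3f}")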
 
  • #691
akhmeteli said:
I will skip a large part of the quote. In the same article Goldstein writes:
"The second formulation of the measurement problem, though basically equivalent to the first one, suggests an important question: Can Bohmian mechanics itself provide a coherent account of how the two dynamical rules might be reconciled? How does Bohmian mechanics justify the use of the "collapsed" wave function in place of the original one? This question was answered in Bohm's first papers on Bohmian mechanics (Bohm 1952, Part I, Section 7, and Part II, Section 2). What would nowadays be called effects of decoherence, produced by interaction with the environment (air molecules, cosmic rays, internal microscopic degrees of freedom, etc.), make it extremely difficult for the component of the after-measurement wave function corresponding to the actual result of the measurement to develop significant overlap — in the configuration space of the very large system that includes all systems with which the original system and apparatus come into interaction — with the other components of the after-measurement wave function. But without such overlap the future evolution of the configuration of the system and apparatus is generated, to a high degree of accuracy, by that component all by itself. The replacement is thus justified as a practical matter. (See also Dürr et al. 1992, Section 5.)"

"To a high degree of accuracy"! So Goldstein says exactly the same as Demystifier (or, if you wish, Demystifier says exactly the same as Goldstein:-) ), namely: collapse is an approximation. The overlap does not disappear!
But here he is talking about the assumption of a collapse followed by another measurement later. What I said before about Demystifier applies to Goldstein too:
It's also possible Demystifier would distinguish between the procedure of repeatedly applying the projection postulate for multiple measurements vs. assuming unitary evolution until the very end of a series of measurements and then applying the Born rule to find the probabilities for different possible combinations of recorded outcomes for all the previous measurements, and that he would say there are cases where Bohmian mechanics would predict slightly different statistics from the first case but not from the second case.
If you assume unitary evolution and only apply the Born rule once at the very end, then the probabilities for different final observed states should be exactly equal to the probabilities given by Bohmian mechanics + the quantum equilibrium hypothesis. See for example the beginning of section 9 where he writes:
According to the quantum formalism, the probability density for finding a system whose wave function is ψ in the configuration q is |ψ(q)|^2. To the extent that the results of measurement are registered configurationally, at least potentially, it follows that the predictions of Bohmian mechanics for the results of measurement must agree with those of orthodox quantum theory (assuming the same Schrödinger equation for both) provided that it is somehow true for Bohmian mechanics that configurations are random, with distribution given by the quantum equilibrium distribution |ψ(q)|^2.
Would you agree that if we assume unitary evolution and then apply the Born rule once, at the very end, the probability that this last measurement will find the system in configuration q will be exactly |ψ(q)|^2? And here Goldstein is saying that according to the quantum equilibrium hypothesis, at any given time the probability that a system's full configuration has an arrangement of positions corresponding to a given observable state is also exactly |ψ(q)|^2. He says something similar in this paper where he writes:
Bohmian mechanics is arguably the most naively obvious embedding imaginable of Schrödinger's equation into a completely coherent physical theory. It describes a world in which particles move in a highly non-Newtonian sort of way, one which may at first appear to have little to do with the spectrum of predictions of quantum mechanics. It turns out, however, that as a consequence of the defining dynamical equations of Bohmian mechanics, when a system has wave function ψ its configuration is typically random, with probability density ρ given by |ψ|^2, the quantum equilibrium distribution.
In any case, I want to be clear on one point: are you really arguing that Bohmian mechanics, when used to predict statistics for observable pointer states (which it can do assuming the same dynamical equation guides particle positions at all times, with no special rule for measurement), might not predict Bell inequality violations in an experiment of the type imagined by Bell? I don't think anyone would argue that Bohmian mechanics gives "approximately" the same results as the standard QM formalism if this were the case--that would be a pretty huge difference! And note section 13 of the Stanford article where Goldstein notes that Bohmian mechanics is explicitly nonlocal--the motions of each particle depend on the instantaneous positions of every other particle in the system.
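For what it's worth, here is a small numerical sketch of my own (a toy calculation assuming the standard singlet state and spin measurements along directions in a single plane, not anything from the Stanford article) showing that a single Born-rule application to joint outcomes already yields a CHSH value above the local-realist bound of 2:

Code:
import numpy as np

def eigvec(theta, s):
    # Eigenvector of the spin observable cos(theta)*sigma_z + sin(theta)*sigma_x
    # with eigenvalue s = +1 or -1 (measurement direction in the x-z plane)
    if s == +1:
        return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    return np.array([-np.sin(theta / 2), np.cos(theta / 2)], dtype=complex)

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    # Correlation <AB> from a single Born-rule application to joint outcomes:
    # P(s1, s2) = |<s1(a), s2(b)|psi>|^2, weighted by the product of outcomes
    return sum(s1 * s2 * abs(np.vdot(np.kron(eigvec(a, s1), eigvec(b, s2)), psi)) ** 2
               for s1 in (+1, -1) for s2 in (+1, -1))

a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # ~2.828, above the local-realist CHSH bound of 2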
 
  • #692
JesseM said:
But in terms of the formalism, do you agree that you can apply the Born rule once for the amplitude of a joint state? One could consider this as an abstract representation of a pair of simultaneous measurements made at the same t-coordinate, for example. Even if the measurements were made at different times, one could assume unitary evolution for each measurement so that each measurement just creates entanglement between the particles and the measuring devices, but then apply the Born rule once to find the probability for the records of the previous measurements ('pointer states' in Bohmian lingo).

I am not quite sure. Could you write down the amplitude you have in mind? It should be relevant to the correlation, shouldn't it? As I said, maybe you can design just one measurement to measure the correlation directly (it would probably be some nonlocal measurement), but that has nothing to do with what is done in Bell experiments and, therefore, is not useful for analysis of experiments. So you can do a lot as a matter of formalism, but the issue at hand is whether what we do is relevant to Bell experiments. I don't accept the procedure you offer, as it has nothing to do with practical measurements, which are performed on both particles. As I said, records are not even permanent. And measurements are never final. That is the curse of unitary evolution.

Furthermore, I am not sure the Born rule can be used as anything more than an operating principle, because I don't have a clear picture of how the Born rule arises from dynamics (unitary evolution).

Let me explain my problem with the derivation for quantum theory in more detail. Say, you are performing a measurement on one particle. If we take unitary evolution seriously, the measurement cannot destroy the superposition, therefore, the probability is not zero for each sign of the measured spin projection even after the measurement. Therefore, the same is true for the second particle. So, technically, the probability should not be zero for both particles having the same spin projection? You cannot eliminate this possibility, at least not if you perform just one measurement.
 
  • #693
I skipped a large part of the quote again.
JesseM said:
In any case, I want to be clear on one point: are you really arguing that Bohmian mechanics, when used to predict statistics for observable pointer states (which it can do assuming the same dynamical equation guides particle positions at all times, with no special rule for measurement), might not predict Bell inequality violations in an experiment of the type imagined by Bell?

The probability density may be the same in Bohmian and standard theory for the entire system. But nobody models the instruments in the Bell proof. So you need something more to calculate the correlation in quantum theory and prove the violations. You have two measurements in experiments (so it is sufficient to use your understanding of Goldstein and Demystifier's words on approximation: "collapse followed by another measurement later"). To get the result, you can use the projection postulate in standard quantum mechanics, or you can say in Bohmian mechanics that collapse is a good approximation there. I am not aware of any proofs that do not use tricks of this kind. So yes, I do think that if you do not use such a trick, you cannot prove violations in Bohmian mechanics. If you can offer a derivation that does not use something like this, I am all ears.

JesseM said:
I don't think anyone would argue that Bohmian mechanics gives "approximately" the same results as the standard QM formalism if this were the case--that would be a pretty huge difference! And note section 13 of the Stanford article where Goldstein notes that Bohmian mechanics is explicitly nonlocal--the motions of each particle depend on the instantaneous positions of every other particle in the system.

Goldstein and Demystifier seem to say just that: collapse (part and parcel of standard quantum mechanics (SQM) so far) is just an approximation in Bohmian mechanics. So don't shoot the messenger (me:-) ). Again, if collapse were precise in Bohmian mechanics (BM), that would mean that BM contains the same internal contradictions as SQM.

And yes, Bohmian mechanics is explicitly nonlocal, but for some reason, there is no faster-than-light signaling there, for example (for the standard probability density). "My" model may have the same unitary evolution as an explicitly nonlocal theory (a quantum field theory), but it's local.
 
  • #694
akhmeteli said:
I am not quite sure. Could you write down the amplitude you have in mind?
The amplitude would depend on the experimental setup, but see the bottom section of p. 155 of this book, which says:
If we denote the basis states for Alice as |a_i> and the basis states for Bob by |b_j>, then the basis states for the composite system are found by taking the tensor product of the Alice and Bob basis states:

|α_ij> = |a_i> ⊗ |b_j> = |a_i>|b_j> = |a_i b_j>
So in an experiment where the |a_i> basis states represent eigenstates of spin for the particle measured by Alice, and the |b_j> basis states represent eigenstates of spin for the particle measured by Bob, you can have basis states for the composite system like |a_i b_j>, and these states will naturally be assigned some amplitude by the wavefunction of the whole system.
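(As a toy numerical illustration of that construction, assuming two-level systems for Alice and Bob, the Kronecker product plays the role of the tensor product; this sketch is mine, not from the book:)

Code:
import numpy as np

# Single-particle basis states for Alice and Bob (two-level systems)
basis = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]

# Composite basis states |a_i b_j> = |a_i> tensor |b_j>, via the Kronecker product
for i in range(2):
    for j in range(2):
        print(f"|a_{i} b_{j}> =", np.kron(basis[i], basis[j]).real)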

Also see p. 194 of this book where they say:
In contrast with the classical physics, where the state of a system is completely defined by describing the state of each of its component pieces separately, in a quantum system the state cannot always be described considering only the component pieces. For instance, the state

(1/√2)(|00> + |11>)

cannot be decomposed into separate states for each of the two bits.
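(Side note, my own quick check rather than anything from that book: one way to verify this numerically is to reshape the state into a 2x2 coefficient matrix and count its nonzero singular values; a product state has exactly one, while this state has two:)

Code:
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), written as a 2x2 coefficient matrix c[i, j]
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
coeffs = psi.reshape(2, 2)

# A product state |a>|b> has exactly one nonzero singular value (Schmidt rank 1);
# two nonzero singular values mean no product decomposition exists.
sv = np.linalg.svd(coeffs, compute_uv=False)
print(sv)                                     # [0.7071... 0.7071...]
print("Schmidt rank:", np.sum(sv > 1e-12))    # 2 -> entangled, not a product state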

akhmeteli said:
It should be relevant to the correlation, shouldn't it? As I said, maybe you can design just one measurement to measure the correlation directly (it would probably be some nonlocal measurement), but that has nothing to do with what is done in Bell experiments and, therefore, is not useful for analysis of experiments. So you can do a lot as a matter of formalism, but the issue at hand is whether what we do is relevant to Bell experiments.
Suppose "as a matter of formalism" we adopt the procedure of applying unitary evolution to the whole experiment and then applying the Born rule to joint states (which includes measurement records/pointer states) at the very end. And suppose this procedure gives predictions which agree with the actual statistics we see when we examine records of experiments done in real life. Then don't we have a formalism which has a well-defined procedure for making predictions and whose predictions agree with experiment? It doesn't matter that the formalism doesn't make predictions about each individual measurement at the time it's made, as long as it makes predictions about the final results at the end of the experiment which we can compare with the actual final results (or compared with the predictions about the final results that any local realist theory would make).
akhmeteli said:
As I said, records are not even permanent. And measurements are never final.
No, but forget what you know theoretically about QM: do you agree that in real life we can write down and share the results we have found at the end of an experiment? The fact that these records may no longer exist in 3000 AD doesn't mean we can't compare the records we see now with the predictions of some formal model.
akhmeteli said:
Furthermore, I am not sure the Born rule can be used as anything more than an operating principle, because I don't have a clear picture of how the Born rule arises from dynamics (unitary evolution).
As a theoretical problem you may be interested in how it arises from dynamics, but if you just want a formal model that makes well-defined predictions that can be compared with reality, you don't need to know. That's why I keep calling it a pragmatic recipe--it doesn't need to have any theoretical elegance! All it needs to be is a procedure that always gives a prediction about the sort of quantitative results human experimenters obtain from real experiments.
akhmeteli said:
Let me explain my problem with the derivation for quantum theory in more detail. Say, you are performing a measurement on one particle. If we take unitary evolution seriously,
On a theoretical level I agree it's good to "take unitary evolution seriously", but not in terms of the pragmatic recipe. If the pragmatic recipe says to apply unitary evolution until some time T when all measurement results have been recorded, then apply the Born rule to the pointer states at time T, that's a perfectly well-defined procedure whose predictions can be compared with the actual recorded results at T, even if we have no theoretical notion of how to justify this application of the Born rule.
akhmeteli said:
the measurement cannot destroy the superposition, therefore, the probability is not zero for each sign of the measured spin projection even after the measurement. Therefore, the same is true for the second particle. So, technically, the probability should not be zero for both particles having the same spin projection?
If they are entangled in such a way that QM predicts you always get opposite spins, that would mean the amplitude for joint states |11> and |00> is zero. But since there is a nonzero amplitude for |01> and |10>, that means there's some nonzero probability for Alice to get result 1 and also some nonzero probability for her to get result 0, and likewise for Bob.
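Numerically, that looks like this (a toy sketch of exactly this anticorrelated state, my own illustration):

Code:
import numpy as np

# Anticorrelated pair (|01> - |10>)/sqrt(2): the |00> and |11> amplitudes vanish
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
P = np.abs(psi) ** 2    # Born rule on the joint states |00>, |01>, |10>, |11>

print(P)                                    # [0.  0.5 0.5 0. ]
print("P(Alice gets 1) =", P[2] + P[3])     # 0.5, nonzero
print("P(Bob gets 1)   =", P[1] + P[3])     # 0.5, nonzero
print("P(same results) =", P[0] + P[3])     # 0.0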
 
  • #695
zonde said:
You consider an ensemble as a statistical ensemble of completely independent members, where each member possesses all the properties of the ensemble as a whole, right?
Otherwise I do not understand how you can justify your statement.

What's your definition of an ensemble? I just think that however you consider an ensemble, you cannot neglect the effect of one of its parts on another. If you mean that a measurement involves averaging over the particles beyond the subensemble, or an approximation, say so. In both cases the predictions of unitary evolution and the projection postulate differ, hence the contradiction.
 
  • #696
DrChinese said:
1. That's what you call a contribution? I guess I have a different assessment of that. Better Bell tests will always be on the agenda and I would say Zeilinger's agreement on that represents no change in his overall direction.

Yes, this is what I call a contribution.

DrChinese said:
2. I consider your comment in 1. above to be acknowledgment of the obvious, which is that it is generally agreed that Bell Inequality violations have been found in every single relevant test performed to date. "Gen-u-wine" ones at that! So you can try and misrepresent the mainstream all you want, but you are 180 degrees off.

Why don't you call it for what it is: you are part of a very small minority regarding Bell. Where's the disrespect in that? If you are confident, just call yourself a rebel and continue your research.

No, it's no acknowledgment. Is it really "generally agreed" if Zeilinger, Shimony and Genovese do not agree?
 
  • #697
DrChinese said:
That is a reasonable comment.

1. I am guessing that for you, entangled particles have states in common due to their earlier interaction. Further, that entangled particles are in fact discrete and are not in communication with each other in any ongoing manner. And yet, it is possible to entangle particles that have never existed in a common light cone. My point is that this won't go hand in hand with any local realistic view.

Again, do entangled "particles that have never existed in a common light cone" present loophole-free evidence of nonlocality, at last? As far as I know, nobody has claimed that. Except maybe you. And entanglement exists in some form in the model.

DrChinese said:
2. EPR argued that the HUP could be beaten with entangled particles. You could learn the value of position from Alice's particle and the momentum from Bob's. And yet, a subsequent observation of Alice's momentum cannot be predicted using Bob's value. (Of course this applies to all non-commuting pairs, including spin.) So EPR is wrong in that regard. That implies that the reality of Alice is somehow affected by the nature of the observation of Bob. I assume you deny this.

It is affected. But not instantaneously. At least it has not been demonstrated experimentally that the effect propagates faster than light. And again, I just don't question HUP, and HUP is valid for the model.
 
  • #698
akhmeteli said:
The probability density may be the same in Bohmian and standard theory for the entire system. But nobody models the instruments in the Bell proof.
You can include the state of the measuring device in a quantum analysis (simplifying its possible states so you don't actually consider it as composed of a vast number of particles), see this google scholar search for "state of the measuring" and "quantum".
akhmeteli said:
You have two measurements in experiments (so it is sufficient to use your understanding of Goldstein and Demystifier's words on approximation: "collapse followed by another measurement later"). To get the result, you can use the projection postulate in standard quantum mechanics
Or just include the measuring device in the quantum state, and apply the Born rule to the joint state of all the measuring devices/pointer states at some time T after the experiment is finished. Goldstein's point about the Bohmian probability being |ψ(q)|^2 means the probabilities for different joint pointer states at T should be exactly equal to the Bohmian prediction about the pointer states at T.
akhmeteli said:
or you can say in Bohmian mechanics that collapse is a good approximation there.
Huh? My understanding is that a purely Bohmian analysis of any physical situation will never make use of "collapse", it'll only find the probabilities for the particles to end up in different positions according to the quantum equilibrium hypothesis. The idea that "collapse is a good approximation" would only be used if you wanted to compare Bohmian predictions to the predictions of a QM recipe which uses the collapse assumption, but if you were just interested in what Bohmian mechanics predicted, you would have no need for anything but the Bohmian guiding equation which tells you how particle positions evolve.
akhmeteli said:
I am not aware of any proofs that do not use tricks of this kind.
OK, but have you actually studied the math of Bohmian mechanics and looked at how it makes predictions about any experiments, let alone Bell-type experiments? I haven't myself, but from what I've read I'm pretty sure that no purely Bohmian derivation of predictions would need to make use of any "trick" involving collapse.
akhmeteli said:
So yes, I do think that if you do not use such a trick, you cannot prove violations in Bohmian mechanics. If you can offer a derivation that does not use something like this, I am all ears.
Well, take a look at section 7.5 of Bohm's book The Undivided Universe, which is titled "The EPR experiment according to the causal interpretation" (another name for Bohmian mechanics), which can be read in its entirety on google books here. Do you see any mention of a collapse assumption there?
akhmeteli said:
Goldstein and Demystifier seem to say just that: collapse (part and parcel of standard quantum mechanics (SQM) so far) is just an approximation in Bohmian mechanics. So don't shoot the messenger (me:-) ).
But Goldstein also says that the probabilities predicted by Bohmian mechanics are just the same as those predicted by QM. Again, I think the seeming inconsistency is probably resolved by assuming that when he talks of agreement he's talking of a single application of the Born rule to a quantum system which has been evolving in a unitary way, whereas when he talks about "approximation" he's talking about a repeated sequence of unitary evolution, projection onto an eigenstate by measurement, unitary evolution starting again from that eigenstate, another projection, etc.
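To illustrate why I think that resolves it, here is a toy numerical check of my own (a made-up two-measurement sequence on a single qubit, not anything from Goldstein's article): when each record is stored in its own pointer qubit, the repeated-projection recipe and the evolve-unitarily-then-apply-the-Born-rule-once recipe give identical joint statistics.

Code:
import numpy as np

# Two sequential measurements on one qubit, separated by a Hadamard:
# (a) Born rule + projection after the first measurement, evolve, Born rule again;
# (b) purely unitary evolution, each measurement modeled as a CNOT copying the
#     system's basis value onto a fresh pointer qubit, with a single Born-rule
#     application to the pointer pair at the very end.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
alpha, beta = 0.6, 0.8    # arbitrary real amplitudes, alpha^2 + beta^2 = 1

# --- (a) repeated projection ---
probs_a = {}
for i in (0, 1):
    p1 = (alpha, beta)[i] ** 2    # Born rule for the first outcome
    post = H @ np.eye(2)[i]       # collapse to |i>, then apply the Hadamard
    for j in (0, 1):
        probs_a[(i, j)] = p1 * abs(post[j]) ** 2

# --- (b) unitary evolution + one Born rule at the end ---
def cnot(n, c, t):
    # CNOT on an n-qubit register: flip qubit t whenever qubit c is 1
    U = np.zeros((2 ** n, 2 ** n))
    for i in range(2 ** n):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[c]:
            bits[t] ^= 1
        U[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1
    return U

# Register order: (system, pointer1, pointer2), both pointers start at |0>
state = np.kron(np.array([alpha, beta], dtype=complex), np.kron([1, 0], [1, 0]))
state = cnot(3, 0, 1) @ state            # first "measurement"
state = np.kron(H, np.eye(4)) @ state    # Hadamard on the system
state = cnot(3, 0, 2) @ state            # second "measurement"

probs_b = {}
for idx, amp in enumerate(state):
    i, j = (idx >> 1) & 1, idx & 1       # read off (pointer1, pointer2)
    probs_b[(i, j)] = probs_b.get((i, j), 0) + abs(amp) ** 2

print(probs_a)   # {(0,0): 0.18, (0,1): 0.18, (1,0): 0.32, (1,1): 0.32}
print(probs_b)   # same joint statistics for the recorded outcomes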
 
  • #699
GeorgCantor said:
Do you know of a totally 100% loophole-free experiment from anywhere in the universe?


akhmeteli said:
I can just repeat what I said several times: for some mysterious reason, Shimony is not quite happy about experimental demonstration of violations, Zeilinger is not quite happy... You are quite happy with it? I am happy for you. But that's no reason for me to be happy about that demonstration. Again, the burden of proof is extremely high for such radical ideas as elimination of local realism.


Yes, they aren't quite happy yet. The departure from the old concepts is just too great.

This isn't much different from Darwin's theory of evolution in the mid-nineteenth century. Not everyone would immediately recognize the evidence (no matter what), for the idea of a fish turning into a human being was just too radical, as you are saying about local realism. The theory of evolution turned the world upside down, but we came to terms with it. Controversial or not, the theory of evolution is here to stay and so is the death of classical realism.
 
  • #700
GeorgCantor said:
Controversial or not, the theory of evolution is here to stay and so is the death of classical realism.

Agree! :approve:

(Very good answer GC!)
 