Is action at a distance possible as envisaged by the EPR Paradox?

Summary
The discussion centers on the possibility of action at a distance as proposed by the EPR paradox, with participants debating the implications of quantum entanglement. It is established that while entanglement has been experimentally demonstrated, it does not allow for faster-than-light communication or signaling. The conversation touches on various interpretations of quantum mechanics, including the Bohmian view and the many-worlds interpretation, while emphasizing that Bell's theorem shows that no local hidden-variable theory can reproduce all quantum predictions. Participants express a mix of curiosity and skepticism regarding the implications of these findings, acknowledging the complexities and ongoing debates in the field. Overall, the conversation highlights the intricate relationship between quantum mechanics and the concept of nonlocality.
  • #1,321
DevilsAvocado said:
... trying to talk reasonable to Bill is a waste of time. He lives in his own little bubble; firmly convinced he represents the "universe", when the fact is that he’s totally lost and totally alone in his "reasoning".
DA, I don't know if you know this, but billschnieder is a working scientist. I don't think that either DrC or JesseM are.

I admire your honest efforts to understand the conundra surrounding Bell's theorem. I think that billschnieder, and JesseM, and DrC, and all of us are interested in understanding this stuff. And, honestly, I don't think that any of us have a definitive way of expressing anything about the nature of reality.

billschnieder's expertise and knowledge exceeds yours, and I think you should take that into account, just as you apparently do wrt JesseM and DrC.

These are not easy considerations. If they were, then notable physicists and mathematicians wouldn't still be arguing about them. And, while I appreciate your input and your apparent interest, I think you should focus on the precise arguments being made. I'm not sure they're good arguments. Maybe you can sort it out, and clarify it, for all of us. But, please, focus on the arguments. They're there to be refuted. So, refute them, or agree with them, or just say that you don't understand them -- and ask some questions. But, please, you and nismaratwork, stop with the 'fanboy' stuff.
 
  • #1,322
RUTA said:
I'm with DrC, I also don't believe "the Moon is there when nobody looks." By "when nobody looks" I mean "when not interacting with anything."
And when is it ever the case that something is not interacting with anything?

RUTA said:
In Relational Blockworld, if the entity "isn't there," i.e., is "screened off," it doesn't exist at all. So, the answer to your question is that there is no Moon to wonder.
Come on RUTA, are you saying that your Relational Blockworld is a description of the physical reality?
 
  • #1,323
ThomasT said:
And when is it ever the case that something is not interacting with anything?

When it exhibits wave-like behavior. Once it interacts with its environment, it acquires definite position (particle-like behavior) per decoherence.

RBW is not the only interpretation in which "non-interacting" means "non-existent." I got that idea from Bohr, Ulfbeck and Mottelson. Zeilinger has also been credited with that claim regarding photons.

ThomasT said:
Come on RUTA, are you saying that your Relational Blockworld is a description of the physical reality?

Absolutely, RBW is an ontological interpretation of QM. What in particular strikes you as unreasonable about this ontology? The non-existence of non-interacting entities (manifested as nonseparability of the experimental equipment)? Or, blockworld?
 
  • #1,324
billschnieder said:
I'm done with this rubbish!

I can only hope...

An interesting note for those still reading: GHZ theorem, another no-go theorem for local realism, shows that EVERY trial will have QM and local realism giving opposite predictions. I.e.

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...

Guess which is experimentally demonstrated? No statistics required! No ensemble interpretation required!
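
For anyone who wants to see where those strings of identical outcomes come from, here is a small numerical sketch (my own toy Python check of Mermin's three-particle GHZ version, not taken from any of the papers or experiments) of how local realism and QM end up predicting opposite values for one of the measurement combinations:

```python
# Toy check of Mermin's 3-particle GHZ argument (illustrative sketch, not from the papers).
# Assumes the state |GHZ> = (|000> + |111>)/sqrt(2) and Pauli X, Y measurements.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)   # (|000> + |111>)/sqrt(2)

for label, op in [("XYY", kron3(X, Y, Y)), ("YXY", kron3(Y, X, Y)),
                  ("YYX", kron3(Y, Y, X)), ("XXX", kron3(X, X, X))]:
    print(label, round(np.real(ghz.conj() @ op @ ghz), 6))

# Output: XYY, YXY and YYX each come out -1, while XXX comes out +1.
# Local realism assigns each particle fixed values x_k, y_k = +/-1, so multiplying the
# three -1 outcomes forces x1*x2*x3 = (-1)^3 = -1, i.e. LR predicts XXX = -1 on every
# run, the opposite of the QM prediction of +1.
```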
 
  • #1,325
DrChinese said:
An interesting note for those still reading: GHZ theorem, another no-go theorem for local realism, shows that EVERY trial will have QM and local realism giving opposite predictions. I.e.

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...

Guess which is experimentally demonstrated? No statistics required! No ensemble interpretation required!
Are you familiar with GHZ experiments at all?

Anyways
from this paper - http://prl.aps.org/abstract/PRL/v91/i18/e180401

"In conclusion, we have demonstrated the statistical and nonstatistical conflicts between QM and LR in fourphoton GHZ entanglement. However, it is worth noting that, as for all existing photonic tests of LR, we also had to invoke the fair sampling hypothesis due to the very low detection efficiency in our experiment."

Guess what? The fair sampling hypothesis does not quite hang together with the ensemble interpretation.
 
  • #1,326
DrChinese said:
I can only hope...

An interesting note for those still reading: GHZ theorem, another no-go theorem for local realism, shows that EVERY trial will have QM and local realism giving opposite predictions. I.e.

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...

Guess which is experimentally demonstrated? No statistics required! No ensemble interpretation required!

There are not many places where the use of locality in the usual derivation of GHZ is explicitly explained: do you know of any?
I am talking about Mermin's version using 3 particles, not the original 4-particle configuration.

The use of realism is obvious, of course.
(I will not say who (among the great experts) thought that locality was not used in GHZ.)

By the way, GHZ is often called "Bell's theorem without inequality". I mentioned that before as one reason why one should not equate Bell inequalities (a form of Boole's inequalities, as pointed out long ago by Itamar Pitowsky in several papers getting deeper and deeper into that matter; this is related to earlier work by Fine) with Bell's theorem, as was claimed in a link related to a dispute around billschnieder.
 
  • #1,327
zonde said:
Are you familiar with GHZ experiments at all?

Anyways
from this paper - http://prl.aps.org/abstract/PRL/v91/i18/e180401

"In conclusion, we have demonstrated the statistical and nonstatistical conflicts between QM and LR in fourphoton GHZ entanglement. However, it is worth noting that, as for all existing photonic tests of LR, we also had to invoke the fair sampling hypothesis due to the very low detection efficiency in our experiment."

Guess what? The fair sampling hypothesis does not quite hang together with the ensemble interpretation.

What does Fair Sampling have to do with my comment? If I predict a -1 every time, and you predict +1 every time, and it always comes up -1... Then it doesn't really much matter how often that occurs.

As I have said a million times :smile: all science involves the fair sampling assumption. There is nothing special about GHZ or Bell tests in that regard.

And as I have also said too many times to count: if the GHZ result is due to some unknown weird bias... what is the dataset we are sampling that produces such a result? I would truly LOVE to see you present that one! Let's see:

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...
Actual sample=Oops!
 
  • #1,328
Actually, I had the predictions of LR and QM reversed in my little sample. It should be more like:

QM=+1, +1, +1, ...
LR=-1, -1, -1, ...

See this article from Zeilinger and Pan:

Multi-Photon Entanglement and Quantum Non-Locality (2002)

"Comparing the results in Fig. 16.7, we therefore conclude that our experimental results verify the quantum prediction while they contradict the local-realism prediction by over 8standard deviations; there is no local hidden-variable model which is capable of describing our experimental results."
 
  • #1,329
ThomasT said:
DA, I don't know if you know this, but billschnieder is a working scientist. I don't think that either DrC or JesseM are.
In what field? A search of arxiv.org for author "Bill Schnieder" turns up no results. Likewise, a general Google search for "Bill Schnieder" and "abstract" doesn't seem to turn up any papers with abstracts (and most papers these days have at least the abstracts online). Can you link to any work by him?
 
  • #1,330
billschnieder said:
So you are saying that if you were not excluding "finite frequentism" you would be able to give an answer?
Only if your list were understood as a statistical sample (either a sample drawn from a larger population, or the results of a series of trials), or if you added a condition such as an experimenter picking a sample from the "population" represented by the list. In the first case we could use finite frequentism to give probabilities; in the second case we could even use "limit frequentism" if we added the condition that the experimenter was picking randomly, using a method that was equally likely (in the limit frequentist sense) to return any entry on the list.

On the other hand, I've never heard of any authority on statistics talking about determining a "probability" from an "abstract list" which is not interpreted as either a sample or a population. If you want to continue with the "abstract list" argument, please find a source where some authority does something like this.
billschnieder said:
So, you are effectively picking and trimming your definition of probability for argumentative purposes, as more of your statements will show below.
I had already stated clearly that I was only interested in talking about probabilities defined in the "frequency in limit as number of trials/sample size goes to infinity", since I think these are the only types of probabilities relevant to Bell's derivation. Again, are you willing to at least consider whether Bell's derivation might make sense (and not have the problems of limited applicability you argue for) when his probabilities are interpreted in these terms, or are you basically refusing to consider the possibility of an interpretation of the paper different from your own, suggesting you are not really interested in trying to understand Bell's argument in its own terms but just in making a lawyer-like rhetorical case against him?
JesseM said:
Does your list of four give us enough information to know the frequency of ++ in the limit as the sample size goes to infinity?
billschnieder said:
Bah! This list is the entire context of the question! The list is the population.
You didn't specify that when you first posted the list, and given that all your previous examples of lists of + and -'s involved a series of trials from a run of a given experiment, there was no reason for me to think the list was intended to be something totally different.
billschnieder said:
True probability of the (++) in the list, is the relative frequency of (++) in the list. This is the frequentist approach, which you now want to abandon in order to stay afloat.
This is just a ridiculous criticism, Bill. I have always been using what I now call the "limit frequentist approach" to avoid your quibbling about finite frequentism (most scientists nowadays also just talk about 'frequentism' when they mean limit frequentism), you can see that in every post where I talked about frequentism I explained I was talking about the limit as the number of trials approached infinity. Go on, find a single post of mine where my own use of probability involved anything other than limit frequentist probabilities; you won't be able to, showing that your "you now want to abandon" comment is either based on totally misreading what I've been saying all along, or knowingly misrepresenting it.
billschnieder said:
Hehe! Do you know of anybody who has ever performed an infinite number of coin or die tosses? I think not. So you can not know what the limit will be as the number of tosses tends toward infinity.
No, you can never know with absolute certainty what the "limit frequentist" probabilities are, but you can have a high degree of confidence that they are close to some value based on both theoretical arguments (like the symmetry of fair coins) and empirical averages with large numbers of trials. In any case, Bell's derivation does not require us to actually know what the limit frequentist probabilities of anything are, it just assumes they have some objective values (encapsulated in a function like ρ(λ)) and that these objective values have certain properties (like ρ(λ) being independent of the detector settings), and derives inequalities for the expectation values (themselves just weighted sums of objective probabilities for different combinations of results) based on that. If all of Bell's theoretical assumptions about the objective probabilities were correct, then given the law of large numbers it should be very unlikely that the empirical averages for an experiment with a great many trials would violate the inequality if the "true" expectation values (determined by limit frequentist probabilities) obey it.
billschnieder said:
Furthermore, did you really think I will not notice the fact that you have now abandoned your favorite frequentist approach and now you are using the bayesian approach (see underlined text above) to decide that the P(Heads) = 0.5.
No. First of all, I'm not saying that the P(heads) is actually guaranteed to equal 0.5, just that it's physically plausible that it would be--that's my hypothesis about the true probability, as distinguished from the true probability itself. A theorist who uses limit frequentist definitions when making theoretical arguments about probabilities (like Bell's) is free to use Bayesian methods when trying to come up with an empirical estimate about what the objective probabilities are. But a Bayesian would say the "probability" is just your best estimate (a more 'subjective' definition of the meaning of probability), while a limit frequentist would distinguish between the estimate and the "true" probability.

Second of all, I'm not using the symmetry of the sample space as a basis for my estimate that P(heads)=0.5, I'm using the actual physical symmetry of the coin itself. If I had an irregular coin with more weight on one side than the other I wouldn't make this estimate, despite the fact that the sample space still contains only two possible outcomes so a Bayesian (or Jaynesian) might say the principle of indifference demands our prior distribution assign each outcome an equal probability.

In any case Bell's derivation does not require any estimates of the true limit frequentist probabilities given by ρ(λ). Only once we have derived the inequality do we have to worry about empirical measurements, and here a limit frequentist can just argue that by the law of large numbers, our sample averages are unlikely to differ significantly from the "true" expectation values (determined by the 'true' limit frequentist probabilities) if the number of trials is large enough.
billschnieder said:
I'm sure if I looked, I will not need to look hard to find a post in which you wrote a list not very different from mine and also wrote P(++) to be 1/4 or similar, without having performed an infinite number of damned "trials".
Nope, you won't be able to, I have been quite consistent about understanding probabilities in terms of the limit frequentist approach, since some of my earliest discussions with you--for example see post #91 from the 'Understanding Bell's Logic' thread, posted back in June, where I said:
It's still not clear what you mean by "the marginal probability of successful treatment". Do you agree that ideally "probability" can be defined by picking some experimental conditions you're repeating for each subject, and then allowing the number of subjects/trials to go to infinity (this is the frequentist interpretation of probability, its major rival being the Bayesian interpretation--see the article Frequentists and Bayesians) ... Do you think there are situations where even hypothetically it doesn't make sense to talk about repetition under the same experimental conditions (so even a hypothetical 'God' would not be able to define 'probability' in this way?) If so, perhaps you'd better give me your own definition of what you even mean by the word "probability", if you're not using the frequentist interpretation that I use.
Even earlier than my discussions with you, in January of 2009 I explained to a different poster that I understood derivations of Bell inequalities to involve frequentist probabilities defined in the limit as the number of trials goes to infinity, see this post:
I didn't say anything about you knowing the objective facts. Again, the frequentist idea is to imagine a God's-eye perspective of all the facts, and knowing the causal relations between the facts, figure out what the statistics would look like for a very large number of trials.

...

If you believe there are objective facts in each trial, even if you don't know them, then it should be possible to map any statement about subjective probabilities into a statement about what this imaginary godlike observer would see in the statistics over many trials--do you disagree? For example, suppose there is an urn with two red balls and one white ball, and the experiment on each trial is to pick two balls in succession (without replacing the first one before picking the second), and noting the color of each one. If I open my hand and see that the first one I picked was red, and then I look at the closed fist containing the other and guess if it'll be red or white, do you agree that I should conclude P(second will be white | first was red) = 1/2? If you agree, then it shouldn't be too hard to understand how this can be mapped directly to a statement about the statistics as seen by the imaginary godlike observer. On each trial, this imaginary observer already knows the color of the ball in my fist before I open it, of course. However, if this observer looks at a near-infinite number of trials of this kind, and then looks at the subset of all these trials where I saw that the first ball was red, do you agree that within this subset, on about half these trials it'll be true that the ball in my other hand was white? (and that by the law of large numbers, as the number of trials goes to infinity the ratio should approach precisely 1/2?)

If you agree with both these statements, then it shouldn't be hard to see how any statement about subjective probabilities in an objective universe should be mappable to a statement about the statistics seen by a hypothetical godlike observer in a large number of trials. If you think there could be any exceptions--objectively true statements of probability which cannot be mapped in this way--then please give an example. It would be pretty earth-shattering if you could, because the frequentist interpretation of probabilities is very mainstream, I'm sure you could find explanations of probability in terms of the statistics over many trials in virtually any introductory statistics textbook.
So I think it's safe to say I have been quite consistent in my understanding of what "probability" means in the context of Bell's derivation, and any notion of yours that I've been waffling is just another example of your leaping to an uncharitable conclusion when you see any ambiguity in the way I have expressed myself.
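
As a concrete check of the urn example quoted above, here is a small Monte Carlo sketch (my own illustration, in Python): among the trials where the first ball drawn is red, the fraction where the second is white does indeed approach 1/2.

```python
# Toy simulation of the urn example: two red balls and one white, draw two without
# replacement, look at the conditional frequency of "second is white" given "first is red".
import random

def trial():
    balls = ["red", "red", "white"]
    random.shuffle(balls)
    return balls[0], balls[1]

n = 200_000
first_red = 0
second_white_given_first_red = 0
for _ in range(n):
    first, second = trial()
    if first == "red":
        first_red += 1
        if second == "white":
            second_white_given_first_red += 1

print(second_white_given_first_red / first_red)   # approaches 1/2 as n grows
```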
billschnieder said:
You say it is impossible to calculate an answer, then when I give you the answer, you then say the answer is wrong. How do you know it is wrong, if you are unable to calculate the correct one?
What I said in my original response (post #1249) was "No, you can't calculate the probability just from the information provided, not if we are talking about objective frequentist probabilities rather than subjective estimates." If we have some additional information about what the list represents, like that it is a population and we have an experimenter picking a random sample from the population (using a method that we are told has an equal probability of picking any of the four entries, with 'probability' understood in the limit frequentist sense), then we can certainly calculate the probability. I already made this point in post #1277:
Again, you said nothing about "randomly picking" from a list, you just gave a list itself and asked for the probabilities of one entry on that list. If you want to add a new condition about "randomly picking", with "randomly" meaning that you have an equal limit frequentist probability of picking any of the four entries on the list, then in that case of course I agree that P(++)=1/4...well duuuuh! But that wasn't the question you asked.
Now, can we get back to discussing Bell's theorem, and not some silly irrelevant example you came up with to prove I "don't understand probability"?
billschnieder said:
JesseM said:
JesseM said:
Note that the wikipedia article says "close to the expected value", not "exactly equal to the expected value".
JesseM said:
An "expectation value" like E(a,b) would be interpreted in frequentist terms as the expected average result in the limit as the number of trials (on a run with detector settings a,b) goes to infinity
Um, how do you figure? The two statements of mine are entirely compatible, obviously you are misunderstanding something here
It is quite clear from the two statements that if the average from the law of large numbers is close to but not equal to the true expectation value, it cannot be the definition of the expectation value! Which one is it? The definition of the expectation value cannot at the same time be only approximately equal to it!?
I don't understand the phrase "average from the law of large numbers". The average from any finite number of trials N can be different from the true expectation value, no matter how large of a finite number N we pick. However, the law of large numbers says that in the limit as N approaches infinity, the average approaches the expectation value with probability 1. Another way of putting this is that if we pick some specific real number epsilon between 0 and 1, then no matter how small of an epsilon we pick, the probability that the empirical average (the 'sample mean') differs from the expectation value by an amount greater than or equal to epsilon should become smaller and smaller with greater values of N, approaching 0 in the limit as N approaches infinity. If you're familiar with the official calculus definition of a "limit" in terms of the "epsilon-delta" definition (see here), this should look pretty familiar.
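
(Stated compactly, the weak law of large numbers I'm describing is just: for every \epsilon > 0, \lim_{N \to \infty} P( |\bar{X}_N - \mu| \geq \epsilon ) = 0, where \bar{X}_N = \frac{1}{N}\sum_{i=1}^N X_i is the sample mean after N trials and \mu is the true expectation value.)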
JesseM said:
billschnieder said:
You can visualize it by thinking that if you would randomly pick an entry from the list I gave you
Well, that's an entirely separate question, because then you are dealing with a process that can repeatedly pick entries "randomly" from the list for an arbitrarily large number of trials. But you didn't say anything about picking randomly from the list, you just presented a list of results and asked what P(++) was.
billschnieder said:
It is not an entirely separate question.
It is because my original objection was that your problem didn't give enough "information" for any definite answer, and here you are providing more information (the idea that we are randomly picking entries from the list and want to know the probability of picking a given entry).
billschnieder said:
I did not mention any trials in my question. But you have stated that the only notion of probability you want to use is the "limit frequentist probability", even though initially you just said "frequentist", but if you want to stick to that limited approach, which is only interested in "trials", you could still have provided an answer to the question by imagining what the limit will be if you actually randomly picked items from my list. Is it your claim that this is also impossible?
No, I already told you at the end of post #1277 that this was fine, although you would have to specify that you were picking in a way that gave an equal probability (in limit frequentist terms) of selecting any of the four items on the list, since it's perfectly possible to conceive a method of selection that would make some entries on the list more probable than others (for example, start at the top of the list, if it's 'heads' pick the top entry and if it's 'tails' move to the next entry and repeat this procedure until you either get a heads or get to the last entry on the list...this method gives a probability of 1/2 for picking the first entry, 1/4 for picking the second, 1/8 for picking the third and 1/8 for picking the fourth).
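
If it helps to see that concretely, here is a quick simulation (my own sketch, in Python) of that biased selection procedure; the empirical frequencies come out near 1/2, 1/4, 1/8, 1/8 rather than 1/4 each:

```python
# Sketch of the biased selection procedure described above: flip a coin at each entry,
# take the entry on heads, otherwise move on; take the last entry if it is reached.
import random
from collections import Counter

def pick(entries):
    for i, entry in enumerate(entries):
        if i == len(entries) - 1 or random.random() < 0.5:   # forced at the last entry
            return entry

entries = ["entry1", "entry2", "entry3", "entry4"]
n = 400_000
counts = Counter(pick(entries) for _ in range(n))
for e in entries:
    print(e, counts[e] / n)   # roughly 0.5, 0.25, 0.125, 0.125
```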
billschnieder said:
Secondly, despite my repeated correction of your false statements that I presented "results" or "trials", you keep saying it. You quickly jumped to claim I never mentioned trials, yet in the next sentence, you say I presented "results", even though I never characterized the list as such, and corrected your attempts to characterize it as such multiple times! You are not being honest.
Yes bill, every time I colloquially use a word like "result" in a way that could possibly be interpreted as a mischaracterization of something you have said, it proves I am a devious snake who is "not being honest" rather than just that I am an ordinary human who sometimes speaks a bit sloppily. Here I did not intend "result" to explicitly mean the results of a series of trials, it could be any list of data (including a list representing a population of possible 'results' that an experimenter might get when picking randomly from the population)
 
  • #1,331
charlylebeaugosse said:
BTW: Someone wrote about the need for mathematical physicists in order to solve any big problem. What have they brought to physics that is acknowledged by the rest of the physicists? I have great respect for them, some of the best ones are my friends, but their contributions are considered more as math. There is a funny story about Simon and Feynman where RF asked BS "who are you, young man?", to which BS answered "I am BS", to which the reply was "and what is your field?". And BS comments: can you imagine that F did not know about my work? I.e., for me: BS did not even understand that RF couldn't care less about the type of things he was doing.

I hope that mathematical physicists will have some recognition as physicists some day. Some of them have deep physical intuition besides tremendous technical power, but so far, ...

Is that supposed to be funny? Modern theoretical physics is mainly mathematical physics; in fact it's been that way for a century or so, and the last great achievements by non-mathematicians were probably back in Faraday's time.

The foundations of QM have been debated for nearly a century by many great thinkers, and the conclusion is that nothing will get resolved by "word" arguments about interpretations; there needs to be a model to back up the argument, and that model has to be in the language of mathematics.

Of course we need experimental results against which to check our models, and in relation to the question of this thread we have the Bell experiments of Aspect et al., GHZ, and delayed-choice erasure experiments, all of which suggest non-locality unless you are a deluded person who thinks a classical explanation makes sense. (The other explanations in terms of reinterpreting reality may have their time, but let's give the physics a chance before opening the gates for the philosophical hordes.)

The most promising current model that might account for non-locality seems to be the Holographic Principle, but to properly understand that you need to understand its origins in the work of Bekenstein and Hawking in the 70s on Black Hole Thermodynamics, and then you need to understand how it fits with current models in String Theory, LQG, etc.

This is difficult stuff, with a heavy dose of mathematical formalism. It is the arena where the useful debate about understanding the universe is taking place, not the pseudo philosophical word-play that goes on in these forums.

If you ask the current great physicists about QM interpretations they will probably admit we are no nearer a resolution, but they do at least know what they're talking about. Here's what Joe Polchinski has to say about the fact that String Theory does not attempt to solve the interpretation problem:
This is an interesting question, to which there is no definite answer. On the one hand, since it was possible to quantize the other three interactions without changing the interpretation of QM, it is not obvious that one should not be able to do the same for gravity. If we restrict to `laboratory’ experiments with gravity (even building black holes in the lab), there is no sharp paradox that would require us to modify QM. QM makes us queasy, but if it gives consistent predictions for all processes we may just have to live with that. Things are much less clear when you get to cosmology. Chaotic inflation, for example, does seem to lead to paradoxes, which might be the clue to a deeper understanding of QM

In the last sentence he hints at MWI, but as you can see he's more interested in hard physics than philosophical fluff (the quote is taken from his comments in this blog entry replying to Smolin's The Trouble with Physics).
 
  • #1,332
JesseM said:
Nope, you won't be able to, ...
It took me 2 minutes to find this, and it looks worse than I had thought. Pay attention to how you characterized Bell's expectation value. Also pay attention to how you are factorizing ρ(λ), within the summation. There is no escape.

JesseM said:
When scratched, any given box will reveal either a cherry or a lemon. Once Alice and Bob have both found the fruit behind the box they choose, they can adopt the convention that a cherry is represented by a +1 and a lemon is represented by a -1, and multiply their respective numbers together to produce a single number for each trial (and that single number will itself be +1 if they both got the same fruit, and -1 if they got different fruits). Then we are interested in the "expectation value" for a given choice of boxes--for example, E(a,b') means the average result Alice and Bob will get after multiplying their numbers together on the subset of trials where Alice chose to scratch box a and Bob chose to scratch box b'. The CHSH inequality then states that if we define the value S by S=E(a,b) - E(a,b') + E(a',b) + E(a',b'), then -2 \leq S \leq 2.

As for the hidden states, there are 16 different possibilities (and here I am replacing each fruit with the number they've chosen to represent it, so a=+1 means that the hidden fruit in box a on Alice's card is a cherry):

1: a=+1, a'=+1, b=+1, b'=+1
2: a=+1, a'=+1, b=+1, b'=-1
3: a=+1, a'=+1, b=-1, b'=+1
4: a=+1, a'=+1, b=-1, b'=-1
5: a=+1, a'=-1, b=+1, b'=+1
6: a=+1, a'=-1, b=+1, b'=-1
7: a=+1, a'=-1, b=-1, b'=+1
8: a=+1, a'=-1, b=-1, b'=-1
9: a=-1, a'=+1, b=+1, b'=+1
10: a=-1, a'=+1, b=+1, b'=-1
11: a=-1, a'=+1, b=-1, b'=+1
12: a=-1, a'=+1, b=-1, b'=-1
13: a=-1, a'=-1, b=+1, b'=+1
14: a=-1, a'=-1, b=+1, b'=-1
15: a=-1, a'=-1, b=-1, b'=+1
16: a=-1, a'=-1, b=-1, b'=-1

In this case, define something like A(a,12) to mean "the value Alice gets if she picks a and the hidden state of the two cards is 12", so going by the above we'd have A(a,12)=-1. Similarly B(b',7)=+1, and so forth. And we can assume that there must be well-defined probabilities for each of the possible hidden states, which can be represented with notation like p(8) and p(15) etc.

Since the expectation value E(a,b) is the average value Alice and Bob get when they multiply their results together in the subset of trials where Alice picks box a and Bob picks box b, we should have: E(a,b) = \sum_{N=1}^{16} A(a,N)*B(b,N)*p(N). Likewise, we should also have E(a,b') = \sum_{N=1}^{16} A(a,N)*B(b',N)*p(N). Combining these gives:

E(a,b) - E(a,b') = \sum_{N=1}^{16} [A(a,N)*B(b,N) - A(a,N)*B(b',N)]*p(N)

With a little creative algebra you can see the above can be rewritten as:

E(a,b) - E(a,b') = \sum_{N=1}^{16} A(a,N)*B(b,N)*[1 \pm A(a',N)*B(b',N)]*p(N) - \sum_{N=1}^{16} A(a,N)*B(b',N)*[1 \pm A(a',N)*B(b,N)]*p(N)
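
(As a quick sanity check of where this derivation is heading, here is a toy Python enumeration, my own illustration rather than part of the original post: every one of the 16 deterministic hidden states gives S = +2 or -2, so any probability mixture p(N) over them must satisfy -2 ≤ S ≤ 2.)

```python
# Enumerate the 16 deterministic hidden states (a, a', b, b') in {+1,-1}^4 and compute
# S = E(a,b) - E(a,b') + E(a',b) + E(a',b') for each; every state gives +2 or -2.
from itertools import product

s_values = {a*b - a*bp + ap*b + ap*bp for a, ap, b, bp in product([+1, -1], repeat=4)}
print(sorted(s_values))   # [-2, 2]
```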
 
  • #1,333
unusualname said:
If you ask the current great physicists about QM interpretations they will probably admit we are no nearer a resolution, but they do at least know what they're talking about. Here's what Joe Polchinski has to say about the fact that String Theory does not attempt to solve the interpretation problem:


In the last sentence he hints at MWI, but as you can see he's more interested in hard physics than philosophical fluff (the quote is taken from his comments in this blog entry replying to Smolin's The Trouble with Physics).

Typical response about QM from someone working in unification. I've received similar responses from Witten and Ashtekar (and Smolin in 2002, but his 2006 book shows he's given it more thought since). They're buried in the technical problems associated with the pursuit of a different beast -- unification of the forces and/or quantization of gravity. We need people working on all fronts, but the fronts are too big for any one person to master them all. Likewise, you might ask the author of a particular interpretation of QM how it bears on unification and receive an equally vague answer. In general, both camps (unification and foundations) agree their problems have a common resolution; they're just working on that resolution from different directions.
 
  • #1,334
RUTA said:
Typical response about QM from someone working in unification. I've received similar responses from Witten and Ashtekar (and Smolin in 2002, but his 2006 book shows he's given it more thought since). They're buried in the technical problems associated with the pursuit of a different beast -- unification of the forces and/or quantization of gravity. We need people working on all fronts, but the fronts are too big for any one person to master them all. Likewise, you might ask the author of a particular interpretation of QM how it bears on unification and receive an equally vague answer. In general, both camps (unification and foundations) agree their problems have a common resolution; they're just working on that resolution from different directions.

I would think the resolution to QM interpretation will fall out rather easily once the "unification" people hit on the correct microscopic description of reality. I can't see how there could be much useful input the other way.

Of course, once it's all resolved someone will point to a passage in Kant which explained it all hundreds of years ago. :rolleyes:
 
  • #1,335
billschnieder said:
According to you, the wikipedia article is wrong.
No, just that it was failing to adequately distinguish between two notions of the "mean" which could lead to certain readers (you) becoming confused. There weren't any statements that were clearly incorrect.
billschnieder said:
Why don't you correct it?
Your wish is my command. I have edited the opening section of the article to more clearly distinguish between the "sample mean" and the "population mean", and make clear that the expected value is equal to the population mean, not the sample mean:
For a data set, the mean is the sum of the values divided by the number of values. The mean of a set of numbers x1, x2, ..., xn is typically denoted by \bar{x}, pronounced "x bar". This mean is a type of arithmetic mean. If the data set was based on a series of observations obtained by sampling a statistical population, this mean is termed the "sample mean" to distinguish it from the "population mean". The mean is often quoted along with the standard deviation: the mean describes the central location of the data, and the standard deviation describes the spread. An alternative measure of dispersion is the mean deviation, equivalent to the average absolute deviation from the mean. It is less sensitive to outliers, but less mathematically tractable.

If a series of observations is sampled from a larger population (measuring the heights of a sample of adults drawn from the entire world population, for example), or from a probability distribution which gives the probabilities of each possible result, then the larger population or probability distribution can be used to construct a "population mean", which is also the expected value for a sample drawn from this population or probability distribution. For a finite population, this would simply be the arithmetic mean of the given property for every member of the population. For a probability distribution, this would be a sum or integral over every possible value weighted by the probability of that value. It is a universal convention to represent the population mean by the symbol μ.[1] In the case of a discrete probability distribution, the mean of a discrete random variable x is given by taking the product of each possible value of x and its probability P(x), and then adding all these products together, giving \mu = \sum x P(x).[2]

The sample mean may be different than the population mean, especially for small samples, but the law of large numbers dictates that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.[3]
As an experiment, let's now see if anyone edits it on the ground that it's incorrect (as opposed to edits for stylistic or other reasons). No fair editing it yourself!
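
To make the sample-mean versus population-mean distinction concrete, here is a toy Python check (my own illustration, using a fair six-sided die whose population mean is 3.5):

```python
# The population mean of a fair die is fixed at 3.5; the sample mean of a finite run
# fluctuates around it, by less and less as the sample grows (law of large numbers).
import random

population = [1, 2, 3, 4, 5, 6]
print("population mean:", sum(population) / len(population))   # 3.5

for n in (10, 1_000, 100_000):
    sample = [random.choice(population) for _ in range(n)]
    print(f"sample mean, n={n}:", sum(sample) / n)              # approaches 3.5
```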
billschnieder said:
It is obvious you are the one who is way off base and you know it.
So, you wish to completely ignore the quotes from various statistics texts I provided? You trust a user-edited site like wikipedia over published texts? Here they are again:
JesseM said:
(edit: See for example this book which distinguishes the 'sample mean' \bar X from the 'population mean' \mu, and says the sample mean 'may, or may not, be an accurate estimation of the true population mean \mu. Estimates from small samples are especially likely to be inaccurate, simply by chance.' You might also look at this book which says 'We use \mu, the symbol for the mean of a probability distribution, for the population mean', or this book which says 'The mean of a discrete probability distribution is simply a weighted average (discussed in Chapter 4) calculated using the following formula: \mu = \sum_{i=1}^n x_i P[x_i ]').
billschnieder said:
All the grandstanding is just a way to stay afloat, not a serious argument against the well-accepted meaning of expectation value.

Wikipedia: http://en.wikipedia.org/wiki/Mean
Wikipedia: http://en.wikipedia.org/wiki/Expected_value
Neither of these sources claim that the expected value is equal to the "sample mean" (i.e. the average of the results obtained on a series of trials), which is what I thought you were claiming when you said:
billschnieder said:
You are given a theoretical list of N pairs of real-valued numbers x and y. Write down the mathematical expression for the expectation value for the paired product.

...

Wow! The correct answer is <xy>
Of course if the "theoretical list" is supposed to represent a population rather than results from a series of trials, and we assume we are picking randomly from the population using a method that has an equal probability of returning any member from the list, in that case I would agree the answer is <xy>. But once again your statement of the problem didn't provide enough information, because the list could equally well be interpreted as a sample, and in that case the expectation value for the paired product would not necessarily be equal to <xy> since <xy> would just be the sample mean--do you disagree?
JesseM said:
Again, you said nothing about "randomly picking" from a list, you just gave a list itself and asked for the probabilities of one entry on that list.
billschnieder said:
Yes, that is exactly what I did, and you answered that it was impossible to do because you wanted to use ONLY a probability approach that involved "trials".
No I didn't, I just said not enough information was provided. If you specify that the list is intended to be a population and we are picking randomly from the population, that's A-OK with me. I already told you this was fine with me at the end of post #1277.
billschnieder said:
You do the same thing for dice and coins, and you have done the same thing in your famous scratch-lotto examples.
In the scratch lotto example I explicitly specified that on each trial the experimenters were picking a box at random to scratch, and at some point I bet I even pedantically specified that "at random" means "equal probability of any of the three boxes". With coins and dice it's generally an implicit assumption that each result is equally probable unless the coin/die is specified to be weighted or something.
JesseM said:
Well, excuse me for thinking your question was supposed to have some relation to the topic we were discussing, namely Bell's theorem.
billschnieder said:
While discussing Bell's INEQUALITIES, not Bell's theorem, which we haven't discussed at all.
Bell's theorem is just that Bell's inequalities must be obeyed in any local hidden variables theory, and since QM theoretically predicts they will be violated in some circumstances, QM is theoretically incompatible with local hidden variables. Anyway, if you want to be pedantic we're discussing the entirety of Bell's derivation of the inequalities, and whether an analysis of the derivation implies that the inequality is only applicable under some limited circumstances (like it only being applicable to data where it is possible to "resort" in the manner you suggested). My claim is that the correct interpretation of the probabilities in Bell's derivation is that they were meant to be "limit frequentist" probabilities, and that if you look at the derivation with this interpretation in mind it all makes sense, and it shows the final inequalities do not have the sort of limited applicability you claim.
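
For concreteness, one standard example of such a violation (using the usual singlet-state correlation E(a,b) = -\cos(a-b), not anything specific to this thread) takes a = 0, a' = \pi/2, b = \pi/4, b' = 3\pi/4, which gives

S = E(a,b) - E(a,b') + E(a',b) + E(a',b') = (-\tfrac{\sqrt{2}}{2}) - (+\tfrac{\sqrt{2}}{2}) + (-\tfrac{\sqrt{2}}{2}) + (-\tfrac{\sqrt{2}}{2}) = -2\sqrt{2}

so |S| = 2\sqrt{2} > 2, the maximum violation QM allows (the Tsirelson bound).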
billschnieder said:
and continue to claim that Bell's equation (2) is not a standard mathematical definition for the expectation value of a paired product.
Nope, it's not. The standard mathematical definition for the expectation value of some variable x (whether it is obtained by taking a product of two other random variables A and B or in some other way) is just a sum or integral over all possible values of x weighted by their probabilities or probability densities, i.e. either \mu = \sum_{i=1}^N x_i P(x_i) or \int x \rho(x) \, dx. You can see that this standard expression for the expectation value involves no variables besides x itself. Now depending on the nature of the specific situation we are considering, it may be that functions like P(x) or ρ(x) can themselves be shown to be equal to some functions of other variables, and this is exactly where Bell's equation (2) comes from. Here, I'll give a derivation:

If x is the product of the two measurement results A and B with detector settings a and b, then according to what I said above the "standard form" for the expectation value should be \mu = \sum_{i=1}^N x_i P(x_i), and since we know that this is an expectation value for a certain pair of detector angles a and b, and that the two measurement results A and B are themselves always equal to +1 or -1, this can be rewritten as:

(+1)*P(x=+1|a,b) + (-1)*P(x=-1|a,b) = (+1)*[P(A=+1, B=+1|a,b) + P(A=-1, B=-1|a,b)] + (-1)*[P(A=+1, B=-1|a,b) + P(A=-1, B=+1|a,b)]

Then in that last expression, each term like P(A=+1, B=+1|a,b) can be rewritten as P(A=+1, B=+1, a, b)/P(a,b). So by marginalization (and assuming for convenience that λ is discrete rather than continuous), we have:

P(A=+1, B=+1|a,b) = \sum_{i=1}^N \frac{P(A=+1, B=+1, a, b, \lambda_i )}{P(a,b)}

And P(A=+1, B=+1, a, b, λi) = P(A=+1, B=+1|a, b, λi)*P(a, b, λi) = P(A=+1, B=+1|a, b, λi)*P(λi | a, b)*P(a,b), so substituting into the above sum gives:

P(A=+1, B=+1|a,b) = \sum_{i=1}^N P(A=+1, B=+1 | a, b, \lambda_i )*P(\lambda_i | a, b)

And if we make the physical assumption that P(λi | a, b) = P(λi) (the no-conspiracy assumption which says the probability of different values of hidden variables is independent of the detector settings), this reduces to

P(A=+1, B=+1|a,b) = \sum_{i=1}^N P(A=+1, B=+1 | a, b, \lambda_i )*P(\lambda_i )

Earlier I showed that the expectation value, written in its standard form, could be shown in this scenario to be equal to the expression

(+1)*[P(A=+1, B=+1|a,b) + P(A=-1, B=-1|a,b)] + (-1)*[P(A=+1, B=-1|a,b) + P(A=-1, B=+1|a,b)]

So, we can rewrite that as

(+1)*[ \sum_{i=1}^N P(A=+1, B=+1 | a, b, \lambda_i )*P(\lambda_i ) + \sum_{i=1}^N P(A=-1, B=-1 | a, b, \lambda_i )*P(\lambda_i )]+ (-1)*[ \sum_{i=1}^N P(A=+1, B=-1 | a, b, \lambda_i )*P(\lambda_i ) + \sum_{i=1}^N P(A=-1, B=+1 | a, b, \lambda_i )*P(\lambda_i )]

Or as a single sum:

\sum_{i=1}^N P(\lambda_i ) * [(+1*+1)*P(A=+1, B=+1|a,b,\lambda_i ) + (-1*-1)*P(A=-1, B=-1|a,b,\lambda_i ) + (+1*-1)*P(A=+1, B=-1|a,b,\lambda_i ) + (-1*+1)*P(A=-1, B=+1|a,b,\lambda_i )]

And naturally if the value of a along with the specific choice of λi completely determines the value of A, and likewise the value of b along with the specific choice of λi completely determines the value of B (another physical assumption), then for any given i in the sum above, three of the conditional probabilities will be 0 and the other will be 1, so it's not hard to see (tell me if you want this step explained further) why the above can be reduced to:

\sum_{i=1}^N A(a,\lambda_i ) B(b, \lambda_i ) P(\lambda_i )

...which is just the discrete form of Bell's equation (2). So, hopefully you require no further proof that although Bell's equation (2) gives one form of the expectation value, it was not meant to contradict the idea that the expectation value can also be written in the standard form:

(+1)*P(product of A and B is +1) + (-1)*P(product of A and B is -1)

...which given the knowledge that both A and B are always either +1 or -1, and A is the result for the detector with setting a while B is the result for the detector with setting b, can be written as:

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)

...which is the equation I have been bringing up over and over. Last time I brought it up, you responded in post #1275 with:
False! The above equation does not appear in Bell's work and is not the expectation value he is calculating in equation (2).
Hopefully the above derivation shows you why Bell's equation (2) is entirely consistent with the above "standard form" of the expectation value, given the physical assumptions he was making. If you still don't agree, please show me the specific step in my derivation that you think is incorrect.
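
And if the symbolic derivation isn't convincing, here is a toy numerical check (my own Python sketch, with made-up response functions A(a,λ) and B(b,λ) for one fixed pair of settings) that the "standard form" and the Bell equation (2) form give the same number:

```python
# Compare two ways of computing E(a,b) for an arbitrary toy local model with a discrete
# hidden variable lambda and one fixed pair of settings (a,b):
#   (1) Bell eq. (2) form:   sum_i A(lambda_i)*B(lambda_i)*P(lambda_i)
#   (2) "standard form":     (+1)*P(product=+1) + (-1)*P(product=-1)
import random

random.seed(0)
n_lambda = 5
A = {lam: random.choice([+1, -1]) for lam in range(n_lambda)}   # hypothetical A(a, lambda)
B = {lam: random.choice([+1, -1]) for lam in range(n_lambda)}   # hypothetical B(b, lambda)
weights = [random.random() for _ in range(n_lambda)]
P = [w / sum(weights) for w in weights]                         # P(lambda_i), setting-independent

e_bell = sum(A[lam] * B[lam] * P[lam] for lam in range(n_lambda))

p_plus = sum(P[lam] for lam in range(n_lambda) if A[lam] * B[lam] == +1)
p_minus = sum(P[lam] for lam in range(n_lambda) if A[lam] * B[lam] == -1)
e_standard = (+1) * p_plus + (-1) * p_minus

print(e_bell, e_standard)   # identical (up to floating-point rounding)
```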
billschnieder said:
Oh, so now you are saying that if given a population from which you can easily calculate relative frequencies, you will still not be able to use your favorite "limit frequentist" approach to obtain estimates of true probabilities, because the process used to sample the population might not be fair. Wow! You have really outdone yourself. If the "limit frequentist" approach is this useless, how come you stick to it, if not just for argumentation purposes?
It's useful in theoretical proofs involving probabilities, such as the derivation of the conclusion that Bell's inequality should apply to the "limit frequentist" expectation values in any local realist universe. And for experimental data, as long as the sample size is large we can use empirical frequencies to estimate a range for the limit frequentist probabilities with any desired degree of confidence, even though we can never be 100% confident the true limit frequentist probability lies in that range (but that's just science for you, you can never be 100% sure of any claim based on empirical evidence, even though you can be very very confident).
 
  • #1,336
billschnieder said:
It took me 2 minutes to find this, and it looks worse than I had thought. Pay attention to how you characterized Bell's expectation value. Also pay attention to how you are factorizing ρ(λ), within the summation. There is no escape.
So you had to look to a discussion with a different person from 2009 to find an example? Anyway, if you look closely you'll see that I did mention the assumption that Alice and Bob were picking which box to scratch at random:
The problem is that if this were true, it would force you to the conclusion that on those trials where Alice and Bob picked different boxes to scratch, they should find the same fruit on at least 1/3 of the trials. For example, if we imagine Bob and Alice's cards each have the hidden fruits A+,B-,C+, then we can look at each possible way that Alice and Bob can randomly choose different boxes to scratch, and what the results would be
Maybe I should have been more explicit about the fact that there was a probability of 1/3 that Alice would scratch a given box on any trial, and likewise for Bob, but that was certainly my implicit assumption. And I have spelled it out more explicitly in other posts, for example this one from a discussion with you in June:
That description is fine, though one thing I would add is that in order to derive the inequality that says they should get the same fruit 1/3 or more of the time, we are assuming each chooses randomly which box to scratch, so in the set of all trials the probability of any particular combination like 12 or 22 is 1/9, and in the subset of trials where they picked different boxes the probability of any combination is 1/6.
So yes, I have always assumed the limit frequentist notion of probability in any of my discussions of the lotto card example, and the post you quoted makes perfect sense with that interpretation if you keep in mind that there is a probability (in limit frequentist terms) of 1/9 that Alice and Bob will pick any given combination of boxes on each trial.
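
For what it's worth, here is a quick Monte Carlo sketch (my own, in Python) of that claim: with identical hidden fruits on both cards and uniformly random box choices, the same-fruit fraction on different-box trials is at least 1/3 for every one of the 8 possible hidden assignments.

```python
# For each possible hidden assignment of fruits to the three boxes, simulate Alice and Bob
# each scratching a box uniformly at random and check the same-fruit frequency on the
# subset of trials where they chose different boxes.
import random
from itertools import product

for hidden in product(["cherry", "lemon"], repeat=3):   # the 8 possible hidden cards
    same = diff = 0
    for _ in range(100_000):
        a, b = random.randrange(3), random.randrange(3)
        if a == b:
            continue
        diff += 1
        same += (hidden[a] == hidden[b])
    print(hidden, round(same / diff, 3))   # always >= 1/3 (roughly 1/3 for mixed cards)
```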

As an aside, can you please edit your post to remove the LaTex code after the words "With a little creative algebra you can see the above can be rewritten as"? The equation there is stretching the window badly, making this page hard to read.
 
  • #1,337
RUTA said:
When it exhibits wave-like behavior. Once it interacts with its environment, it acquires definite position (particle-like behavior) per decoherence.
I like to muse that reality is waves in a hierarchy of media, that the true god's eye view would just see a bunch of interacting waveforms, some bounded or particle-like, and some not, some more persistent than others, etc.

However, at the level of our experience, we see cars and computers and planets and ... moons. I don't think it makes much sense to say that the moon pops into and out of existence depending on whether we happen to be looking at it. The whole quantum-speak thing can get quite silly -- detectors, moons, cats in various 'superpositions' of existing and not existing, of being here and there.

RUTA said:
RBW is not the only interpretation in which "non-interacting" means "non-existent." I got that idea from Bohr, Ulfbeck and Mottelson. Zeilinger has also been credited with that claim regarding photons.
It seems a bit silly to say that there's nothing moving from emitter to detector. Certainly the more sensible inference or hypothesis, and the one that practical quantum physics is based on, is that quantum experimental phenomena result from the instrumental probings of an underlying reality -- a reality which is presumably behaving according to some set of physical principles and which exists whether it's being probed or not.

Einstein's spooky action at a distance entails spacelike separated events determining, instantaneously, each other's existence. This is, prima facie, a nonsensical notion -- and Einstein was right to dismiss it.

RUTA said:
Absolutely, RBW is an ontological interpretation of QM. What in particular strikes you as unreasonable about this ontology? The non-existence of non-interacting entities (manifested as nonseparability of the experimental equipment)? Or, blockworld?
It's not unreasonable. Especially if you're a GR person. I just find it conceptually unappealing. Anyway, is there any way to know to what extent some theoretical construction is a description of 'reality'?
 
  • #1,338
JesseM said:
So can you please just answer the question: are you using (or are you willing to use for the sake of this discussion) the limit frequentist notion of probability, where "probability" is just the frequency in the limit as the number of trials goes to infinity?
billschnieder said:
No! I am not willing to pick and choose the definition of probability for argumentation purposes.
It's not "for argumentation purposes", it's for trying to understand what Bell actually meant, and how the probabilities in his derivation are interpreted by physicists. Your own argument which claims to show the derivation has very limited applicability is based on using a non-limit-frequentist interpretation of the probabilities in Bell's derivation. My claim is that this problem of limited applicability vanishes if we interpret the probabilities in his derivation in limit frequentist terms. Which is more likely a priori, that some guy posting on the internet is the first one to ever discover a major hole in Bell's derivation which never occurred to Bell or any other physicist, or that Bell and other physicists interpreted the probabilities in limit frequentist terms? (which again is a very common way to think about the meaning of probabilities, not some obscure notion I'm dragging up for the sake of being difficult) Are not even willing to consider that he might have been interpreting probabilities this way, to see if the problem of limited applicability would disappear in this case?
billschnieder said:
First you said it was ONLY the "frequentist" view you wanted. Now it is ONLY a particular variant of frequentism that you want
In post #1330 I linked back to an earlier discussion with you where I made clear that I was using "frequentist" probabilities to mean frequencies in the limit as the number of trials goes to infinity (what I am now calling 'limit frequentism' in hopes of avoiding exactly the sort of quibbling you're doing above), and an even earlier discussion with another poster from 2009 where I said the same thing, before you even started posting here. Hopefully this puts to rest the notion that I am somehow shifting my position, and if these posts don't convince you I again challenge you to find any posts by me discussing Bell inequalities where I haven't been talking in limit frequentist terms.
billschnieder said:
except when it involves coins and dice, you really use the "bayesian" view.
Nope, see the three paragraphs in post #1330 starting with "No. First of all, I'm not saying that the P(heads) is actually guaranteed..."
JesseM said:
No, the "standard mathematical definition" of an expectation value involves only the variable whose value you want to find the expectation value for, in this case the product of the two measurement results.
...
The standard definition would give us:
\sum_{i=1}^N R_i P(R_i )
billschnieder said:
Wikipedia: http://en.wikipedia.org/wiki/Expected_value
E(g(X)) = \int_{-\infty}^{\infty} g(X)f(X)\,dX
The wikipedia equation calculates an expectation value for a function of X rather than X itself, but the important thing is that the expectation value equations always can be reduced to a sum/integral over the product (possible value of variable in question)*P(variable takes that value), summed or integrated over all possible values. For example, if we define a new variable Y=g(X), then it can be shown that the above equation reduces to E(Y) = \int Y * P(Y), which is the "basic form" I have been talking about. This is easier to see if we consider a discrete X and Y, so we want to show that this:

\sum_i g(x_i )f(x_i )

reduces to this:

\sum_j Y_j * P(Y_j )

First consider the case in which each xi gives a unique Yj when plugged into g(x). Then in that case, the probability of a given Yj is naturally going to be the same as the probability of the corresponding xi, and Yj is equal to xi, so the above will be satisfied. On the other hand, suppose there are multiple possible values of xi which, when plugged into g(x), would give the same Yj. Then for that value of j, it is true that P(Yj) = (sum over all values of i for which g(xi)=Yj) f(xi). So in that case, it must be true that for a specific value of j, Yj*P(Yj) = (sum over all values of i for which g(xi)=Yj) g(xi)*f(xi). So from this it's not hard to see why \sum_i g(x_i )f(x_i ) reduces to \sum_j Y_j * P(Y_j )...I don't feel like writing out a formal proof, but if you don't see why what I said above guarantees it, just imagine we have five possible values of x, namely x1, x2, x3, x4, x5, and only two possible values of Y, Y1 and Y2, such that g(x1) = g(x3) = g(x4) = Y1, and g(x2) = g(x5) = Y2. Then if we write out \sum_i g(x_i )f(x_i ) it would be:

g(x1)*f(x1) + g(x2)*f(x2) + g(x3)*f(x3) + g(x4)*f(x4) + g(x5)*f(x5)

And since g(x1) = g(x3) = g(x4) and g(x2) = g(x5), we can gather together terms as follows:

g(x1)*[f(x1) + f(x3) + f(x4)] + g(x2)*[f(x2) + f(x5)]

And since g(x1)=Y1 and g(x2) = Y2, and since P(Y1) = [f(x1) + f(x3) + f(x4)] and P(Y2) = [f(x2) + f(x5)], the above reduces to:

Y1*P(Y1) + Y2*P(Y2)

...which is just \sum_j Y_j * P(Y_j ). Hopefully you can see how this would generalize to arbitrary sums \sum_i g(x_i )f(x_i ) and \sum_j Y_j * P(Y_j ), where every g(xi) yields some Yj.
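If it helps, here is a small numerical check of exactly the five-value example above (the particular values of f and of Y1, Y2 below are arbitrary choices of mine, chosen only so the sums can be verified by hand):

Code:
from collections import defaultdict

x_values = [1, 2, 3, 4, 5]
f = {1: 0.10, 2: 0.20, 3: 0.25, 4: 0.15, 5: 0.30}  # f(x_i), sums to 1
g = {1: 7.0, 3: 7.0, 4: 7.0, 2: -2.0, 5: -2.0}     # g(x1)=g(x3)=g(x4)=Y1=7, g(x2)=g(x5)=Y2=-2

lhs = sum(g[x] * f[x] for x in x_values)           # sum_i g(x_i) f(x_i)

P_Y = defaultdict(float)                           # P(Y_j) = sum of f(x_i) over x_i with g(x_i) = Y_j
for x in x_values:
    P_Y[g[x]] += f[x]
rhs = sum(y * p for y, p in P_Y.items())           # sum_j Y_j P(Y_j)

print(lhs, rhs)  # both print 2.5 -- the same sum, just grouped differently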
billschnieder said:
You are way off base. Bell's equation two is the standard mathematical definition. The only difference between Bell's equation (2) and the last equation above is the identification of symbols:
X = λ
g(X) = g(λ) = A(a,λ)*B(b,λ)
f(X) = ρ(λ)
The simplest mathematical definition deals not with the expectation value of a function of a random variable, but with the expectation value of the random variable itself; i.e. not E(g(X)) = \int_{-\infty}^{\infty} g(X)f(X) but rather E(Y) = \int Y*P(Y). In any case, I don't really want to discuss the definition of "simple"; my claim is just that all expectation values must reduce to that last form, and this is true of Bell's equation (2), as I showed in the derivation near the end of post #1335.
billschnieder said:
Bell is not trying to redefine anything. He is simply using the standard mathematical definition of expectation value for the paired product.
Any mathematician would understand that whatever form we choose to write an "expectation value" in, it can always be reduced to the form E(Y) = \int Y*P(Y). Since Bell was ultimately computing the expectation value for the product of the two measurements, if we let Y equal the product of the two measurements it must be true that his expression can be reduced to (sum over all possible values of Y) Y*P(Y). And I showed that such a reduction is in fact possible (given Bell's physical assumptions) in post #1335.
billschnieder said:
Note the dλ at the end of the expression! There is no expression in Bell's paper like the following:
E(a,b) = (+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (+1)*P(detector with setting a gets result -1, detector with setting b gets result -1)
Your claim that such an expression is missing, because Bell was simplifying for physicists is a cop-out.
No, it's just something that would be understood implicitly by anyone well-versed in probability theory, it isn't necessary to state the obvious. But since it's not obvious to you, again see the explicit derivation in post #1335.
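To make this concrete, here is a sketch using a toy local hidden-variable model of my own invention (the particular choice of A, B and the uniform ρ(λ) are arbitrary illustrations, not anything from Bell's paper). It computes the expectation both as a Monte Carlo estimate of ∫A(a,λ)B(b,λ)ρ(λ)dλ and in the "basic form" as a sum of the four possible products weighted by their probabilities; the two are just different groupings of the same terms, so they agree:

Code:
import math, random

def A(a, lam):                       # Alice's deterministic +/-1 outcome
    return 1 if math.cos(lam - a) >= 0 else -1

def B(b, lam):                       # Bob's deterministic +/-1 outcome
    return -1 if math.cos(lam - b) >= 0 else 1

def E_two_ways(a, b, n=200_000, seed=1):
    rng = random.Random(seed)
    lams = [rng.uniform(0, 2 * math.pi) for _ in range(n)]        # rho(lambda) uniform

    # Form 1: Monte Carlo estimate of  integral A(a,l) B(b,l) rho(l) dl
    integral_form = sum(A(a, l) * B(b, l) for l in lams) / n

    # Form 2: estimate P(+1,+1), P(+1,-1), P(-1,+1), P(-1,-1), weight each by its product
    counts = {(+1, +1): 0, (+1, -1): 0, (-1, +1): 0, (-1, -1): 0}
    for l in lams:
        counts[(A(a, l), B(b, l))] += 1
    basic_form = sum((ra * rb) * (c / n) for (ra, rb), c in counts.items())

    return integral_form, basic_form

print(E_two_ways(0.0, math.pi / 3))  # the two numbers are identical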
billschnieder said:
Furthermore, there is no mention of "limit frequentist", let alone "frequentist" in Bell's paper. You are invoking those terms now only to escape humiliation.
I have already linked to posts dating way back where I explained that I interpreted Bell's probabilities in terms of the frequencies in the limit as the number of trials goes to infinity, so the idea that I am changing my tune to "escape humiliation" is silly. And no, Bell doesn't mention limit frequentism, but he also doesn't mention any other notion of probability like finite frequentism or Bayesianism, so it's up to readers to interpret the meaning of "probability" in Bell's paper. Again, limit frequentism is pretty much the default assumption in theoretical proofs involving probabilities in science. But even if this weren't true, the mere fact that his derivation appears to have major holes when his probabilities are interpreted in non-limit-frequentist terms, while those holes may disappear when the probabilities are interpreted in limit-frequentist terms (that is my assertion, anyway), is reason enough for you to at least consider that he might have meant the probabilities in this way before triumphantly proclaiming you have found a flaw in Bell's reasoning that has somehow escaped the notice of every physicist who studied it until now. At least, you should consider this possibility if you have any intellectual integrity and want to do your best to figure out what Bell meant, as opposed to just wanting to make a rhetorical case against him by picking an interpretation designed to make him look bad.
 
  • #1,339
unusualname said:
I would think the resolution to QM interpretation will fall out rather easily once the "unification" people hit on the correct microscopic description of reality. I can't see how there could be much useful input the other way.

That's certainly the majority opinion. I think the best the foundations community can hope for is to find a new approach to unification, whereas a unified theory would certainly resolve all foundational issues.

As an example of how work in the foundations community might bear on the unification effort, our QM interpretation (Relational Blockworld) suggests a nonseparable Regge calculus approach to classical gravity (where nonseparable means "direct action" in the path integral approach). Obviously, changing classical gravity from Regge calculus (discrete, path integral version of GR) to nonseparable (direct action) Regge calculus, changes the quantum gravity program. It also changes what is meant by "unification," since the dynamical perspective, and therefore forces, are no longer part of a fundamental approach.

I didn't bring up unification per RBW to debate its merits, but merely to point out how the foundations community might contribute to the larger program of unification.
 
  • #1,340
ThomasT said:
However, at the level of our experience, we see cars and computers and planets and ... moons. I don't think it makes much sense to say that the moon pops into and out of existence depending on whether we happen to be looking at it. The whole quantum-speak thing can get quite silly -- detectors, moons, cats in various 'superpositions' of existing and not existing, of being here and there.

For most of us the phrase "not there when nobody looks" is simply a metaphor for the non-existence of non-interacting entities.

ThomasT said:
It seems a bit silly to say that there's nothing moving from emitter to detector. Certainly the more sensible inference or hypothesis, and the one that practical quantum physics is based on, is that quantum experimental phenomena result from the instrumental probings of an underlying reality -- a reality which is presumably behaving according to some set of physical principles and which exists whether it's being probed or not.

Einstein's spooky action at a distance entails spacelike separated events determining, instantaneously, each other's existence. This is, prima facie, a nonsensical notion -- and Einstein was right to dismiss it.

Well, if QM is right, one (or both) of these things has to go -- you can't have realism and locality. In our interpretation, we punt on realism, i.e., separability.

ThomasT said:
It's not unreasonable. Especially if you're a GR person. I just find it conceptually unappealing.

Most do :smile:

ThomasT said:
Anyway, is there any way to know to what extent some theoretical construction is a description of 'reality'?

That's a thorny epistemological question. Better leave that for another thread.
 
  • #1,341
billschnieder said:
There is no expression in Bell's paper like the following:
JesseM said:
E(a,b) = (+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (+1)*P(detector with setting a gets result -1, detector with setting b gets result -1)
Your claim that such an expression is missing, because Bell was simplifying for physicists is a cop-out.
Incidentally, in case Bill or anyone else has any further doubts on this point, note that on p. 14 of the paper http://cdsweb.cern.ch/record/142461/files/198009299.pdf Bell does write the expectation value in a basically identical form in equation (13):

E(a,b) = P(yes, yes|a,b) + P(no, no|a,b) - P(yes, no|a,b) - P(no, yes|a,b)
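And estimating equation (13) from raw coincidence counts is a one-liner; here is a small sketch (the counts below are hypothetical, not from any actual experiment):

Code:
def E_from_counts(n_yy, n_nn, n_yn, n_ny):
    """Estimate E(a,b) = P(yes,yes) + P(no,no) - P(yes,no) - P(no,yes) from coincidence counts."""
    total = n_yy + n_nn + n_yn + n_ny
    return (n_yy + n_nn - n_yn - n_ny) / total

print(E_from_counts(430, 420, 75, 75))  # hypothetical counts -> 0.7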
 
  • #1,342
JesseM said:
Incidentally, in case Bill or anyone else has any further doubts on this point, note that on p. 14 of the paper http://cdsweb.cern.ch/record/142461/files/198009299.pdf Bell does write the expectation value in a basically identical form in equation (13):

E(a,b) = P(yes, yes|a,b) + P(no, no|a,b) - P(yes, no|a,b) - P(no, yes|a,b)

I think you need more, like killing a werewolf... take the heart, the head, and burn the body. Even then, I somehow doubt that billy will concede anything. I enjoyed reading the paper however.

ThomasT: Why does the appealing or unappealing nature of an ontology matter? The only thing that is relevant is matching with empirical evidence, the science, and the math. I find the inevitability of death quite unappealing, but I don't doubt it as a result.
 
  • #1,343
DrChinese said:
What does Fair Sampling have to do with my comment? If I predict a -1 every time, and you predict +1 every time, and it always comes up -1... Then it doesn't really much matter how often that occurs.
In this experiment photons are created in the H/V basis but measurements are performed in the +45/-45 basis (measurement x) and the L/R basis (measurement y).
If you measure linearly polarized light in a basis that is rotated by 45° you get a completely uncertain result: +1 and -1 have equal probabilities.
If you measure linearly polarized light in a circular polarization basis you get a completely uncertain result as well: +1 and -1 have equal probabilities.
So without detection bias the prediction for any single measurement in this case is 0.5. That means that the composed result from all involved measurements (mind you, not a single output but a calculation composed of many different outputs, provided you have an algorithm for that) gives -1 half of the time and +1 half of the time without detection bias.

Because neither of the involved measurements gives a definite result without detection bias, GHZ is a comparison of two different detection biases.
So this type of experiment is a pure test of fair sampling, with no involvement of definite outcomes based on particle properties.
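For reference, the 50/50 figures follow directly from the Born rule; here is a minimal check (my own, using the standard polarization state vectors, nothing specific to the actual experimental setup):

Code:
import numpy as np

H = np.array([1, 0], dtype=complex)                       # horizontally polarized photon
plus45 = np.array([1, 1], dtype=complex) / np.sqrt(2)     # +45 deg linear
minus45 = np.array([1, -1], dtype=complex) / np.sqrt(2)   # -45 deg linear
L = np.array([1, 1j], dtype=complex) / np.sqrt(2)         # left circular
R = np.array([1, -1j], dtype=complex) / np.sqrt(2)        # right circular

for name, basis_state in [("+45", plus45), ("-45", minus45), ("L", L), ("R", R)]:
    prob = abs(np.vdot(basis_state, H)) ** 2              # Born rule |<basis|H>|^2
    print(name, round(prob, 3))                           # every line prints 0.5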

DrChinese said:
As I have said a million times :smile: all science involves the fair sampling assumption. There is nothing special about GHZ or Bell tests in that regard.
I can not claim that I have said this a million times but I have responded like this at least once already:

Yes, that's right. All of science relies on various approximations, including the fair sampling assumption. But every science except QM does not blame reality, causality and whatever else when it discovers a contradiction in its conclusions. Instead it admits error and re-examines its assumptions (including the fair sampling assumption) one by one until it resolves the contradiction.
So all of science involves the fair sampling assumption, but science also has quite strict rules about when to give up that assumption.

DrChinese said:
And as I have also said too many times to count: if the GHZ result is due to some unknown weird bias... what is the dataset we are sampling that produces such a result? I would truly LOVE to see you present that one! Let's see:

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...
Actual sample=Oops!

DrChinese said:
Actually, I had the predictions of LR and QM reversed in my little sample. It should be more like:

QM=+1, +1, +1, ...
LR=-1, -1, -1, ...
From the article you linked:
"First, one performs yyx, yxy, and xyy experiments. If the results obtained are in agreement with the predictions for a GHZ state, then the predictions for an xxx experiment for a local realist theory are exactly opposite to those for quantum mechanics."

So the dataset consists of outcomes for each of yyx, yxy, xyy and xxx experiments.
There are 8 possible outcomes for each of those 4 experiments, which are not even conducted at the same time. 16 of those possible outcomes (8 outcomes × 4 experiments = 32 in total) are observed much more frequently than the other 16. So please provide an algorithm for how you get your "output" from the 32 different outcomes observed in the experiment at different times for different setups.
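For concreteness, here is how the opposite xxx predictions arise (a sketch of mine using the textbook GHZ state (|000>+|111>)/√2, rather than the exact state and sign conventions of the actual experiment):

Code:
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)             # Pauli x ("x measurement")
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)          # Pauli y ("y measurement")

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)                           # (|000> + |111>)/sqrt(2)

for label, op in [("xyy", kron3(sx, sy, sy)), ("yxy", kron3(sy, sx, sy)),
                  ("yyx", kron3(sy, sy, sx)), ("xxx", kron3(sx, sx, sx))]:
    print(label, round(float(np.real(np.vdot(ghz, op @ ghz))), 3))
# xyy, yxy and yyx each give -1, while xxx gives +1.  A local realist who assigns
# pre-existing values x_i, y_i = +/-1 to each photon has
# (x1 y2 y3)(y1 x2 y3)(y1 y2 x3) = x1 x2 x3, so three results of -1 force
# x1 x2 x3 = -1, the opposite of the quantum prediction of +1 for xxx.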

DrChinese said:
See this article from Zeilinger and Pan:

Multi-Photon Entanglement and Quantum Non-Locality (2002)

"Comparing the results in Fig. 16.7, we therefore conclude that our experimental results verify the quantum prediction while they contradict the local-realism prediction by over 8standard deviations; there is no local hidden-variable model which is capable of describing our experimental results."
From the same article:
"If we assume the spurious events are just due to experimental errors, we can thus conclude within the experimental accuracy that for each photon, 1, 2 and 3, quantities corresponding to both x and y measurements are elements of reality. Consequently, a local realist, if he accepts that reasoning, would thus predict that for a xxx experiment only the combinations V'V'V',H'H'V',H'V'H', and V'H'H' will be observable (Fig. 16.6b)."

This type of reasoning is not only dispensable for the ensemble interpretation, it even contradicts it. That's because it completely ignores the role of the ensemble in determining the outcome of a measurement.
 
  • #1,344
zonde said:
So this type of experiment is a pure test of fair sampling, with no involvement of definite outcomes based on particle properties. So all of science involves the fair sampling assumption, but science also has quite strict rules about when to give up that assumption.

From the article you linked:
"First, one performs yyx, yxy, and xyy experiments. If the results obtained are in agreement with the predictions for a GHZ state, then the predictions for an xxx experiment for a local realist theory are exactly opposite to those for quantum mechanics."
...

So I predict that every boy is male and you predict every boy is female. These are the kind of opposite predictions we make (it's an analogy :smile: ). I provide a random but potentially biased sample which consists of all male boys to 8 standard deviations. Now, exactly how is it that we always get male boys? For this to be science - your claim that is - you need to show me a reeeeeeeeeeeeeally big batch of female boys. Where are they?

This is the strict requirement you speak of. It applies to YOU, my friend. You can't claim it is science without showing something! Absence of evidence actually is evidence of absence when it comes to sampling.
 
  • #1,345
zonde said:
In this experiment photons are created in the H/V basis but measurements are performed in the +45/-45 basis (measurement x) and the L/R basis (measurement y).
If you measure linearly polarized light in a basis that is rotated by 45° you get a completely uncertain result: +1 and -1 have equal probabilities.
If you measure linearly polarized light in a circular polarization basis you get a completely uncertain result as well: +1 and -1 have equal probabilities.
So without detection bias the prediction for any single measurement in this case is 0.5.
What about this do you not understand?

DrChinese said:
So I predict that every boy is male and you predict every boy is female. These are the kind of opposite predictions we make (it's an analogy :smile: ). I provide a random but potentially biased sample which consists of all male boys to 8 standard deviations. Now, exactly how is it that we always get male boys? For this to be science - your claim that is - you need to show me a reeeeeeeeeeeeeally big batch of female boys. Where are they?

This is the strict requirement you speak of. It applies to YOU, my friend. You can't claim it is science without showing something! Absence of evidence actually is evidence of absence when it comes to sampling.
Yes of course. You tell me what I predict and then easily refute my prediction.
You know what that is called?
A strawman.
 
  • #1,346
zonde said:
1. What about this do you not understand?

2. Yes of course. You tell me what I predict and then easily refute my prediction.
You know what that is called?
A strawman.

1. Nothing. What's your point?

2. You are the local realist, what do YOU predict for the xxx case? Does it match QM or not?
 
  • #1,347
charlylebeaugosse said:
Max Jammer indeed, but the book is (on Amazon):

The Philosophy of Quantum Mechanics: The Interpretations of Quantum Mechanics in Historical Perspective by Max Jammer (Hardcover, June 1974).

Fine's book is:
The Shaky Game (Science and Its Conceptual Foundations series) by Arthur Fine (Paperback, Dec. 15, 1996).

There is from there an easy way to get to original writings by Einstein.

As for Einstein's realism, I would bet he believed that the Moon did not even need apes to be there.
But the real issue, I think, is realism at the microscopic level.
So you think that he would not have been a microscopic realist in the EPR sense? Specifically, if two entangled particles can each be measured on either of two or more noncommuting properties X and Y (like position and momentum), and measuring the value of property X for particle #1 allows us to determine with probability 1 what the value of property X would be for particle #2 if we measured property X for particle #2, then I understand the EPR paper to suggest this means there must be a local "element of reality" associated with particle #2 that predetermines the result it would give for a measurement of property X, even if we actually measure property Y for particle #2.

This quote by Einstein from p. 5 of Bell's paper http://cdsweb.cern.ch/record/142461/files/198009299.pdf does suggest to me he favored microscopic realism in the EPR sense:
If one asks what, irrespective of quantum mechanics, is characteristic of the world of ideas of physics, one is first of all struck by the following: the concepts of physics relate to a real outside world ... It is further characteristic of these physical objects that they are thought of as arranged in a space time continuum. An essential aspect of this arrangement of things in physics is that they lay claim, at a certain time, to an existence independent of one another, provided these objects "are situated in different parts of space".

The following idea characterizes the relative independence of objects far apart in space (A and B): external influence on A has no direct influence on B ...

There seems to me no doubt that those physicists who regard the descriptive methods of quantum mechanics as definitive in principle would react to this line of thought in the following way: they would drop the requirement ... for the independent existence of the physical reality present in different parts of space; they would be justified in pointing out that the quantum theory nowhere makes explicit use of this requirement.

I admit this, but would point out: when I consider the physical phenomena known to me, and especially those which are being so successfully encompassed by quantum mechanics, I still cannot find any fact anywhere which would make it appear likely that (that) requirement will have to be abandoned.

I am therefore inclined to believe that the description of quantum mechanics ... has to be regarded as an incomplete and indirect description of reality, to be replaced at some later date by a more complete and direct one.
charlylebeaugosse said:
I am only bothered a lot by all the lies and false info that have led us to a situation where most physicists (in or close to QM) would relinquish locality and not realism at the microscopic level.
Do you really think it's true that "most physicists" would prefer to relinquish locality and not realism? If that were the case I would think Bohmian mechanics would be much more popular! Instead it seems to me that both the Copenhagen interpretation (which abandons 'realism') and the Many-worlds interpretation (whose 'realist' status depends somewhat on how you define 'realism', but it is an interpretation that many advocates say is a completely local one, see my post #8 on this thread for some references along with my own toy model illustrating how a local interpretation involving multiple copies of each experimenter can explain Bell inequality violations without being non-local) are a lot more popular, see some of the polls linked to here.
charlylebeaugosse said:
Also, I only add my physicist's sensitivity to real work done by Jammer and Fine (see also the conference where Fine (?), Jammer, Peierls and Rosen contributed for the 50th anniversary of EPR, and other papers here and there, mostly the correspondence of Einstein (mainly with Born, but there are other gems), the Schilpp book, and one pocket book on AE's views on the world where there is more politics than physics but some good pieces anyway) and as much reading of Einstein as I could put my hands on. But as I do not read German, I lose lots of first-hand material.
Any chance you could post some of Einstein's quotes that you think show he was not a "naive realist" or would not have agreed with the ideas in the EPR paper? If it would take too long to find them and type them up, I will understand of course.
 
  • #1,348
DrChinese said:
So I predict that every boy is male and you predict every boy is female. These are the kind of opposite predictions we make (it's an analogy :smile: ). I provide a random but potentially biased sample which consists of all male boys to 8 standard deviations. Now, exactly how is it that we always get male boys? For this to be science - your claim that is - you need to show me a reeeeeeeeeeeeeally big batch of female boys. Where are they?

This is the strict requirement you speak of. It applies to YOU, my friend. You can't claim it is science without showing something! Absence of evidence actually is evidence of absence when it comes to sampling.

DrC is so right here: see the papers or books on GHZ. The contradiction occurs on every occurrence. Contrary to the inequality-based form of Bell's Theorem, the GHZ sort, "Bell's Theorem without inequalities," does not use any statistical hypothesis; the story is:
Realism + Locality => a false equality (for each individual sample, rather than for some ideal sampling).

Now, I had asked if anyone has seen a nice explanation of how locality is used. Any hint?
 
  • #1,349
unusualname said:
Is that supposed to be funny? Modern theoretical physics is mainly mathematical physics; in fact it has been that way for a century or so. The last great achievements by non-mathematicians were probably back in Faraday's time.

The foundations of QM have been debated for nearly a century by many great thinkers, and the conclusion is that nothing will get resolved by "word" arguments about interpretations, there needs to be a model to back up the argument and that model has to be in the language of mathematics.

Of course we need experimental results from which to check our models, and in relation to the question of this thread we have Bell experiments of Aspect et al, GHZ and delayed choice erasure experiments all of which suggest non-locality unless you are a deluded person who thinks a classical explanation makes sense. (The other explanations in terms of reinterpreting reality may have their time, but let's give the physics a chance before opening the gates for the philosophical hordes)

The most promising current model that might account for non-locality seems to be the Holographic Principle, but to properly understand that you need to understand its origins in the work of Bekenstein and Hawking in the 70s on Black Hole Thermodynamics, then you need to understand how it works with current models in String Theory, LQG etc.

This is difficult stuff, with a heavy dose of mathematical formalism. It is the arena where the useful debate about understanding the universe is taking place, not the pseudo philosophical word-play that goes on in these forums.

If you ask the current great Physicists about QM interpretations they will probably admit we are no nearer a resolution, but they do at least know what they're talking about, here's what Joe Polchinski has to say about the fact that String Theory does not attempt to solve the interpretation problem:


In the last sentence he hints at MWI, but as you can see he's more interested in hard physics, not philosophical fluff. (Quote taken from his comments in this blog entry replying to Smolin's The Trouble with Physics.)
I was a bit joking, but how many Nobel prizes in physics cover papers whose main content was one or more theorems (in the sense accepted by mathematicians)? And isn't it true that most mathematical physicists are housed in math departments (a bit less so since superstring theory took control of the budgets in HEP physics)? But:
1) what proportion of physicists consider superstring?
2) what proportion of physicists consider superstring as physics?
My main point was in fact that such statements and questions (including mine here) seem far from the subject, and far from physics. As I said, I have an immense consideration for mathematical physicists.
 
  • #1,350
zonde said:
Let me give longer quote from Einstein essay:

But... this is an essay from 1949. How can this relate to Bell's Theorem?

zonde said:
So I think that Einstein would have discarded without regret any restrictions placed by orthodox QM on local realistic interpretation.

I don’t agree. As you state yourself:

zonde said:
Einstein was a die-hard empiricist.

I absolutely do not think Einstein would start looking for far-fetched loopholes, etc. He was way too smart for that. I think he would have accepted the situation as the start of something new.

zonde said:
Restriction I am talking about is that the same measurement settings at both sites should give the same outcome with probability of 1.

Well, this is pretty obvious, isn't it? The completely "new thing" is when the polarizers are nonparallel! Einstein would of course immediately have realized that his own argument had boomeranged on him:
no action at a distance (polarizers parallel) ⇒ determinism
determinism (polarizers nonparallel) ⇒ action at a distance

zonde said:
If we view the Ensemble Interpretation as a physically realistic interpretation and not as some other metaphysical interpretation, we of course cannot talk about some "Global RAM".
We can talk only about some "local RAM" that is justifiable by the physical dynamics inside the equipment used in experiments.

If we decide to have very long intervals between every entangled pair, we should expect complete decoherence of entanglement.

Are you saying that if we run an EPR-Bell experiment as I proposed, we "should expect complete decoherence of entanglement" and the experiment would fail? No expected QM statistics??
 
