Is action at a distance possible as envisaged by the EPR Paradox?

  • #1,301
RUTA said:
(responding to Devils...) I don't know of any limit to the size of something that can be "screened off" in principle. In practice? Well, that's another matter :smile:
Ruta, Devilsavocado, DrChinese, etc. (people of good will and not realist): it seems to me that

- 1) Phase one is making sure of the non-existence of classical realism at the microscopic scale.

- 2) Next, deciding if quantal realism holds true or not (I mean here a form of realism defended by some advocates of CQT (Consistent Quantum Theory))

- 3) Later still, addressing the issue about the moon.

- 4) Even later, or at any time (but this is hard), checking whether HVs compatible with QM (a theory that would permit exact predictions, but never on both members of a conjugate pair) can be constructed, whether they would help, and how to do it: at some point we will have to negate or support scientifically the belief of Heisenberg and others that "one should not look for other variables". As long as one speaks of the HVs of de Broglie, Bohm, or Bell, no question is left in my mind, but for HVs that respect the UP, I do not know if they can exist, nor whether they could help in anything if found. Solving that positively would be an immense achievement, but the other questions seem more within present reach, at least the first one.

The above is a proposal for ordering by urgency (1 to 4, but I may have forgotten steps or independent questions that should belong here), but if anyone can solve the later items before the earlier ones, that is fine: I just would not like to spend time defending vague ideas about the moon while the crucial problem of local realism calls for a solution (thanks to Dr Bell planting a doubt in many minds; but as I said, if the issue can be decided by physics, it was worth the big confusion about locality and the misattributions to Einstein, starting with Bell).
 
Last edited:
Physics news on Phys.org
  • #1,302
charlylebeaugosse said:
Ruta, Devilsavocado, DrChinese, etc. (people of good will and not realist): it seems to me that

Phase one is making sure of the non-existence of classical realism at the microscopic scale.

Next, deciding if quantal realism holds true or not.

Later still, addressing the issue about the moon.

This is a proposal for ordering by urgency, but if anyone can solve the later items before the earlier ones, that is fine: I just would not like to spend time defending vague ideas about the moon while the crucial problem of local realism calls for a solution (thanks to Dr Bell planting a doubt in many minds; but as I said, if the issue can be decided by physics, it was worth the big confusion about locality and the misattributions to Einstein, starting with Bell).

Agreed, the issue will only be decided by physics. I think the goal of threads like this isn't to decide the issue, but merely debate the possibilities.
 
  • #1,303
RUTA said:
I'm using the term "Einsteinian reality" generically to mean "local and separable." I have no idea what he would say, I wouldn't even begin to argue that.

Ruta, you see, "local and separable" is precise. "Einsteinian reality" to the contrary depends on what you know of Einstein's writing, and others, including misinformation propagated by people who want to show themselves better than Einstein (what would be the public value of being better than Podolsky?). Also, in "Einsteinian reality" there is reality, and while I (we?) love locality and separability, reality is for me THE ENEMY, so attributing him to the great guy does not help, except the glory of people who have greatly contributed t the confusion (see the book of Asher Peres, and let me know if he does not hint at QM being non-local, or look at the very public writings of the great Roger Penrose that deal with Bell's Theory, and let me know if he does not propagate the dark forces?). If no-locality was only defended by imbeciles, we would not have to worry. Bell was a crypto-realist who revealed himself later as realist, this pushed hard the fact that QM is non-local.
Now: look at Bell 1964, and check by yourself that Bell claim that QM had been proved to be non-local so that one should try HVs to restore its locality: see the 2 first sentences.
Now the Bell Theorem as state it there is about predictive HVs compatible with QM, but isn't non-locality suggested as a way out? So please be precise here, e.g., if you have students (graduate or else) or read papers that have some success. Lack of truth or words that can help that have to chased actively.
 
  • #1,304
RUTA said:
Agreed, the issue will only be decided by physics. I think the goal of threads like this isn't to decide the issue, but merely debate the possibilities.

I won't mind being part of a group that solves a question (though perhaps I do not care about having publications with one or a few authors). In fact, groups formed over the web might be the best chance of progress on some very hard questions, including precise questions in physics. This being said, when you write "the issue will only be decided by physics", do you mean that you are sure that physics can solve it, or that if there is a solution it can only come from physics? (Sorry for the mad precision, but I have spent a few years in pure math before coming back to physics, though not mathematical physics, which is for me a branch of applied math that requires talents I do not have.) I do believe, though, that it would be useful for all of us to be as precise as we can, be it only to avoid unnecessary confusions and disputes due only to misunderstanding. I feel some convergence, despite apparent divergences of writing: we might mostly need to adjust vocabulary.
 
  • #1,305
charlylebeaugosse said:
This being said, when you write "the issue will only be decided by physics", do you mean that you are sure that physics can solve it, or that if there is a solution it can only come from physics?

Both: I'm confident that physics can solve this and the solution will, ipso facto, come from physics.
 
  • #1,306
charlylebeaugosse said:
I won't mind being part of a group that solves a question (though perhaps I do not care about having publications with one or a few authors). In fact, groups formed over the web might be the best chance of progress on some very hard questions, including precise questions in physics. This being said, when you write "the issue will only be decided by physics", do you mean that you are sure that physics can solve it, or that if there is a solution it can only come from physics? (Sorry for the mad precision, but I have spent a few years in pure math before coming back to physics, though not mathematical physics, which is for me a branch of applied math that requires talents I do not have.) I do believe, though, that it would be useful for all of us to be as precise as we can, be it only to avoid unnecessary confusions and disputes due only to misunderstanding. I feel some convergence, despite apparent divergences of writing: we might mostly need to adjust vocabulary.

Hardly any remaining questions in physics have been or will be solved by people who aren't competent in mathematical physics. At the very least you should be aware of the existing models and their deficiencies before attempting to "solve" any questions.

Bell's result attracts amateurs and crackpots since it can be understood without a huge investment of effort into learning real mathematics and physics. Unfortunately, the resulting discussions are mostly an amusing illustration of mental difficulties rather than anything worthwhile.
 
  • #1,307
unusualname said:
Hardly any remaining questions in physics have been or will be solved by people who aren't competent in mathematical physics. At the very least you should be aware of the existing models and their deficiencies before attempting to "solve" any questions.

Bell's result attracts amateurs and crackpots since it can be understood without a huge investment of effort into learning real mathematics and physics. Unfortunately, the resulting discussions are mostly an amusing illustration of mental difficulties rather than anything worthwhile.

TYVM. I do have over 120 papers, collaborations with some of the leading figures in math, a long past in physics as well, and about 80 patents. Yet I have seen in these pages, besides stupid remarks, posts by a few people who either are smart professionals or are people we miss in the labs. Most of the stupid things about Bell's theorem were written by pros: I have not much patience with those papers, and even less with non-professional writings, except when they are of very good quality. The same applies to delayed choice, delayed erasure, and interference in general, but of course Bell and related matters are the main crackpot attractor. Yet I think it worthwhile to see if collective thinking can lead us to otherwise hard-to-get results. I have collaborated all my life and am curious about the value of large-scale collaboration (on a single well-defined theory problem). We'll see...
 
  • #1,308
charlylebeaugosse said:
As I have developed in previous posts, on the basis of writings of Einstein, Fine, and Jammer, Einstein was not a naive realist, at least after 1927, and in fact provided the first (only so far) proof of non-realism in 1931 with Tolman and Podolsky.
When you say he was not a "naive realist", is that in contrast with some other form of realism, or do you think he was not a realist of any kind? And you mention Jammer, is that Max Jammer's book "Einstein and Religion" or some other publication? (if it is that book, do you know what pages discuss Einstein's views on realism?) Also, what publications of Einstein and Fine are you referring to?
 
  • #1,309
charlylebeaugosse said:
TYVM. I do have over 120 papers, collaborations with some of the leading figures in math, a long past in physics as well, and about 80 patents. Yet I have seen in these pages, besides stupid remarks, posts by a few people who either are smart professionals or are people we miss in the labs. Most of the stupid things about Bell's theorem were written by pros: I have not much patience with those papers, and even less with non-professional writings, except when they are of very good quality. The same applies to delayed choice, delayed erasure, and interference in general, but of course Bell and related matters are the main crackpot attractor. Yet I think it worthwhile to see if collective thinking can lead us to otherwise hard-to-get results. I have collaborated all my life and am curious about the value of large-scale collaboration (on a single well-defined theory problem). We'll see...

My PhD was in general relativity, but I've been working in the foundations community since 1994. It's just my impression (and I'm a nobody ... ), but I haven't seen any real collaboration, per se. There are some general "groups," the largest seems to be Many Worlds, then the Bohmians, followed by variations on backwards causation, but within any "group" it's pretty much a collection of independent researchers -- nothing unified like research in string theory. I don't know the social dynamics, all I can report is what I perceive. The point is, I wouldn't hold out much hope of generating a large scale unified assault on this problem :smile:

Let me ask you, what approach are you looking to advance?
 
  • #1,310
RUTA said:
My PhD was in general relativity, but I've been working in the foundations community since 1994. It's just my impression (and I'm a nobody ... ), but I haven't seen any real collaboration, per se. There are some general "groups," the largest seems to be Many Worlds, then the Bohmians, followed by variations on backwards causation, but within any "group" it's pretty much a collection of independent researchers -- nothing unified like research in string theory. I don't know the social dynamics, all I can report is what I perceive. The point is, I wouldn't hold out much hope of generating a large scale unified assault on this problem :smile:

Let me ask you, what approach are you looking to advance?

I am thinking about some experiments (thought and/or real) that may help establish, or significantly support, the non-realist point of view (to be co-authored by all people whose contributions are used, more or less, and in anonymous form if people insist, in which case the PF pseudonyms would be acknowledged as representing contributors). I have several lines of ideas in mind, probably some based on mistakes of mine, and I would propose a few from one line to start with. Or I would start with a less ambitious project such as the analysis of Wheeler delayed-choice type experiments (and then would hope to have Cthugha, for instance, on board). I am relatively new to QM (6 years), where I hope to bring my experience in qualitative methods acquired in non-linear dynamics (mostly, both math and physics): I am bad at what most pros are good at and better in arcane methods and viewpoints.

In fact, what I am most interested in is trying this idea that the web can help create big collective brains. This is more important than the first question(s) that would be solved, as then many others could follow. The main reason to have a few teams on a few subjects soon would be to explore what rules work best. We could perhaps even begin with two threads on the same basic subject (two questions about said subject), one with full freedom, one where a subgroup would soon form a sort of police on what is relevant or not, taking care of re-launching the discussion when needed. That might be more fun, and more of a contribution, than solving one physics question (of course, not great for people still looking for a job, or a Ph.D.). Maybe I am turning into a relatively young crackpot after all.
 
  • #1,311
charlylebeaugosse said:
Ruta, Devilsavocado, DrChinese, etc. (people of good will and not realist):

...

I won't mind being part of a group that solves a question


I think RUTA and unusualname have some really good points here. Yes, it would be marvelous to put together a group and solve some real mysteries in science, but I think that is to underestimate the problem, to say the least... Right now, in this very thread, we are experiencing "one" who has wandered off into "The Hazy Swamp of Crackpots of No Return", believing he has solved "everything" alone, using nothing else but probability. While the real probability of doing just that is not good, not good at all, at least if you are alone...

Fundamentally, EPR-Bell is not a question (or fight) between locality/realism/FTL/LHVT, etc – it’s much bigger than that (and I think RUTA agrees?). The genius(es) who solve this question are going to present the next paradigm in physics, where QM + GR + Gravity = True, and perhaps even TOE.

To me EPR-Bell is a parallel to the Michelson–Morley experiment, and what followed after that, but even more complicated (to solve at least).

I don’t think this is something one solves in a discussion over internet. It’s just too big.

Furthermore, I think it’s a big mistake to make any hasty conclusions on what is right or wrong, if you plan to solve this... no offense, but talking about "dark forces" and "THE ENEMY" and so on, can’t be fruitful before we know for sure, can it?

Also if we look back, it all becomes a little 'amusing'. For many nonlocality is repulsive, unnatural, etc, but it was not that long ago that one of the brightest minds in history, Isaac Newton, found his own law of gravity and the notion of "action at a distance" deeply uncomfortable, so uncomfortable that he made a reservation in 1692:
That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it.


This is funny! And future generations will of course laugh at us and our current 'limitations'! :smile:

And it is also fun to discuss this and learn more, so let’s continue! :wink:
 
  • #1,312
JesseM said:
When you say he was not a "naive realist", is that in contrast with some other form of realism, or do you think he was not a realist of any kind? And you mention Jammer, is that Max Jammer's book "Einstein and Religion" or some other publication? (if it is that book, do you know what pages discuss Einstein's views on realism?) Also, what publications of Einstein and Fine are you referring to?
Max Jammer indeed, but the book is:

The Philosophy of Quantum Mechanics: The Interpretations of Quantum Mechanics in Historical Perspective by Max Jammer (1974).

Fine's book is:
The Shaky Game (Science and Its Conceptual Foundations series) by Arthur Fine (1996).

From there, there is an easy way to get to original writings by Einstein.

As for Einstein's realism, he did believe that the Moon did not even need apes, I would bet.
But the real issue, I think, is realism at the microscopic level. But I would not like to be considered an unconditional supporter of A.E., since I am not. I am only bothered a lot by all the lies and false information that have led us to a situation where more physicists (in or close to QM) would relinquish locality rather than realism at the microscopic level.
For me, it is like having algebra under the control of bandits, or biology controlled by creationists. I have posted a lot on my views of Einstein's realism, so I would rather stop on that for a while. Also, I only add my physicist's sensitivity to real work done by Jammer and Fine (see also the conference where Fine (?), Jammer, Peierls and Rosen contributed for the 50th anniversary of EPR, other papers here and there, mostly the correspondence of Einstein (mainly with Born, but there are other gems), the Schilpp book, and one pocket book on AE's views on the world where there is more politics than physics but some good pieces anyway) and as much reading of Einstein as I could put my hands on. But as I do not read German, I lose lots of first-hand material.
 
  • #1,313
RUTA said:
I don't know of any limit to the size of something that can be "screened off" in principle. In practice? Well, that's another matter :smile:

Okay, thanks. I think I have a question regarding "screened off" + BB + CMB... but I must think it over. Hope to see you tomorrow!
 
  • #1,314
DevilsAvocado said:
Furthermore, I think it’s a big mistake to make any hasty conclusions on what is right or wrong, if you plan to solve this... no offense, but talking about "dark forces" and "THE ENEMY" and so on, can’t be fruitful before we know for sure, can it?

Also if we look back, it all becomes a little 'amusing'. For many nonlocality is repulsive, unnatural, etc, but it was not that long ago that one of the brightest minds in history, Isaac Newton, found his own law of gravity and the notion of "action at a distance" deeply uncomfortable, so uncomfortable that he made a reservation in 1692:



This is funny! And future generations will of course laugh at us and our current 'limitations'! :smile:

And it also fun to discuss this and learn more, so let’s continue! :wink:

Action at a distance was very odd indeed, like the lack of realism (even if only microscopic) is now. That is what I have read too. I do believe that the reason why locality is more readily abandoned than realism by professionals of quantum physics is as follows:
Realism has been coded in our brains for millions of years, or at least about 100,000 years, while
the discovery of the finite speed of light is very new.

Modern physics has been marked by the destruction of credos (simultaneity, continuity, parity, etc.). Destroying realism by physical argument, for good (and not as a new credo), would be great. Perhaps more modest goals should be tried first.

BTW: Someone wrote about the need for mathematical physicists in order to solve any big problem. What have they brought to physics that is acknowledged by the rest of the physicists? I have great respect for them, and some of the best ones are my friends, but their contributions are considered more as math. There is a funny story about Simon and Feynman, where RF asked BS "who are you, young man?", to which BS answered "I am BS", to which the reply was "and what is your field?". And BS comments: can you imagine that F did not know about my work? I.e., for me: BS did not even understand that RF couldn't care less about the type of things he was doing.

I hope that mathematical physicists will have some recognition as physicists some day. Some of them have deep physical intuition besides tremendous technical power, but so far, ...

The power of collective thinking is worth trying if the crackpots are kept away de facto by ignoring them, and if we get some of the people I have seen in my short experience with Physics Forums. 99% chance of failure, perhaps 99.99%. But solving one problem would be great, and perhaps those interested in the experience should find 1 or 2 problems that are "a bit" less ambitious than the issue of microscopic realism. Anyway, I have spent some of my life trying to solve questions with low odds of solution, which helped me solve lesser questions. I would not advise a grad student to spend (much) time on that, but think of the reward if we even only begin to understand how to solve hard questions as an open group; we can open other threads to advertise the questions being attacked, can't we? Now for the "enemy" and the "dark forces": it is not defined by the position but by the use, or not, of false information while knowing the truth. I was a realist most of my life and a supporter of non-locality when I heard about the subject and talked to some of the lead authors: this is what brought me into the field. I would not think badly of people with provably wrong positions if I am convinced that they hold them out of ignorance. Anyway, I do not expect everyone to be excited by trying, and I am prepared for failure, as when I tackle "hard problems" (or very hard ones). I do not expect people to spend much energy before some hope of success becomes a bit more reasonable.
 
  • #1,315
JesseM said:
billschnieder said:
I gave you an abstract list. No mention of anything such as a trial. No mention of anything such as a physical process. I asked you to give me the probability of one of the entries from the list, and you told me it was impossible, despite the fact that this is what is done every day in your favorite frequentist approach to probability.
Not if we are excluding "finite frequentism", which I already told you I was doing.
So you are saying that if you were not excluding "finite frequentism" you would be able to give an answer? So, you are effectively picking and trimming your definition of probability for argumentative purposes, as more of your statements below will show. You are not being serious.

JesseM said:
Does your list of four give us enough information to know the frequency of ++ in the limit as the sample size goes to infinity?
Bah! This list is the entire context of the question! The list is the population. The true probability of (++) in the list is the relative frequency of (++) in the list. This is the frequentist approach, which you now want to abandon in order to stay afloat.


JesseM said:
billschnieder said:
When ever you say the probability of Heads and Tails is 0.5 you are doing it, whenever you say the probability of one face of a die is 1/6, you are doing the exact same thing you now claim is impossible. Go figure.
[u]No, in those cases I am just using the physical symmetry of the object being flipped/rolled to make a theoretical prediction about what the limit frequency would be[/u], perhaps along with the knowledge that empirical tests do show each option occurs with about equal frequency in large samples

Hehe! Do you know of anybody who has ever performed an infinite number of coin or die tosses? I think not. So you can not know what the limit will be as the number of tosses tends toward infinity. And since you have continued to insist on your ridiculous idea that the "true probability" must be defined as the limit as the number of trials tends towards infinity, the above response is very telling.

Furthermore, did you really think I will not notice the fact that you have now abandoned your favorite frequentist approach and now you are using the bayesian approach (see underlined text above) to decide that the P(Heads) = 0.5. If you can use symmetry of the coin to decide that P(Heads) = 0.5, why couldn't you also use symmetry of my abstract list to decide that P(++) is 1/4?? I'm sure if I looked, I will not need to look hard to find a post in which you wrote a list not very different from mine and also wrote P(++) to be 1/4 or similar, without having performed an infinite number of damned "trials". So as I mentioned earlier, you are not being serious, just finding anything you can hang-on to, even if it means contradicting yourself.

JesseM said:
billschnieder said:
I already gave you the answer which is 1/4.
Yes, and that answer is incorrect if we are talking about the "limit frequentist probability", as I already made clear I was doing.

You say it is impossible to calculate an answer, then when I give you the answer, you then say the answer is wrong. How do you know it is wrong, if you are unable to calculate the correct one? You are way off base, and the answer is correct in ANY probabilistic approach.

The relative frequency of (++) in my list is 1/4. Since my list is the population, P(++) = 1/4; you do not need any trials to determine this.
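For readers following this exchange, a minimal Python sketch of the two readings being argued over: the four-entry list and the 1/4 figure come from the posts above, while the uniform random sampling step is an added illustrative assumption that the list itself does not specify.

```python
import random

# The list from the posts above; "++" occurs once among four entries.
population = ["++", "+-", "-+", "--"]

# Finite-frequency reading: relative frequency of ++ in the list itself.
p_finite = population.count("++") / len(population)       # 0.25

# Limit-frequency reading: long-run frequency of ++ under uniform random
# draws from the list (an added sampling assumption, not stated in the question).
random.seed(0)
n_draws = 1_000_000
hits = sum(random.choice(population) == "++" for _ in range(n_draws))
p_limit = hits / n_draws                                   # close to 0.25

print(p_finite, p_limit)
```

Under that uniform-sampling assumption the finite count and the long-run frequency coincide at 1/4; the dispute in the thread is over whether such a sampling assumption is needed before "probability" is defined at all.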

JesseM said:
JesseM said:
Note that the wikipedia article says "close to the expected value", not "exactly equal to the expected value".
JesseM said:
An "expectation value" like E(a,b) would be interpreted in frequentist terms as the expected average result in the limit as the number of trials (on a run with detector settings a,b) goes to infinity
Um, how do you figure? The two statements of mine are entirely compatible, obviously you are misunderstanding something here
It is quite clear from the two statements that if the average from the law of large numbers is close to, but not equal to, the true expectation value, it cannot be the definition of the expectation value! Which one is it? The definition of the expectation value cannot at the same time be only approximately equal to it!?

JesseM said:
Yes, and with that context there isn't enough information to estimate the limit frequentist probability which is the only notion of probability I want to use
...
billschnieder said:
You can visualize it by thinking that if you would randomly pick an entry from the the list I gave you
Well, that's an entirely separate question, because then you are dealing with a process that can repeatedly pick entries "randomly" from the list for an arbitrarily large number of trials. But you didn't say anything about picking randomly from the list, you just presented a list of results and asked what P(++) was.
It is not an entirely separate question. I did not mention any trials in my question. But you have stated that the only notion of probability you want to use is the "limit frequentist probability", even though initially you just said "frequentist", but if you want to stick to that limited approach, which is only interested in "trials", you could still have provided an answer to the question by imagining what the limit will be if you actually randomly picked items from my list. Is it your claim that this is also impossible?

Secondly, despite my repeated correction of your false statements that I presented "results" or "trials", you keep saying it. You quickly jumped to claim I never mentioned trials, yet in the next sentence, you say I presented "results", even though I never characterized the list as such, and corrected your attempts to characterize it as such multiple times! You are not being honest.
 
Last edited:
  • #1,316
JesseM said:
Failing to do the specific thing I said it should do, yes.

According to you, the Wikipedia article is wrong. Why don't you correct it? It is obvious you are the one who is way off base, and you know it. All the grandstanding is just a way to stay afloat, not a serious argument against the well-accepted meaning of expectation value.

Wikipedia: http://en.wikipedia.org/wiki/Mean
In statistics, mean has two related meanings:
* the arithmetic mean (and is distinguished from the geometric mean or harmonic mean).
* the expected value of a random variable, which is also called the population mean.

Wikipedia: http://en.wikipedia.org/wiki/Expected_value
In probability theory and statistics, the expected value (or expectation value, or mathematical expectation, or mean, or first moment) of a random variable is the integral of the random variable with respect to its probability measure.

That you continue to pursue this strange objection to standard mathematics is very telling.

JesseM said:
billschnieder said:
The law of large numbers says if you would randomly pick a large number of pairs from our given abstract list, the average value will get close to the true expectation value as the number of pairs you pick tends towards infinity.
Again, you said nothing about "randomly picking" from a list, you just gave a list itself and asked for the probabilities of one entry on that list.
Yes, that is exactly what I did, and you answered that it was impossible to do because you wanted to use ONLY a probability approach that involved "trials". So I said, if you really were serious about using ONLY a probability approach that involved "trials", you would have imagined randomly picking an infinite number of pairs from the given list, and still be able to give an answer very close to the "true expectation" I wanted, which is simply the relative frequency of (++) in my list, obtained without any trials. You do the same thing for dice and coins and you have done the same thing in your famous scratch-lotto examples, but when doing it here would have proven fatal to your line of argument, you balked.


JesseM said:
Well, excuse me for thinking your question was supposed to have some relation to the topic we were discussing, namely Bell's theorem.
While discussing Bell's INEQUALITIES, not Bell's theorem, which we haven't discussed at all, you claimed, and continue to claim, that Bell's equation (2) is not a standard mathematical definition for the expectation value of a paired product. So we went down this rabbit trail in order to force you to admit that you are wrong, or be humiliated in the process of trying to defend the ridiculous claim. And you know very well that once you admit that you were wrong, you have no valid response to the rest of my argument, so you are standing your ground, even though you know full well that what you are arguing is wrong and borderline dishonest.

JesseM said:
Only if you assume by symmetry that it's a "fair" die or coin, in which case you have a reasonable theoretical basis for believing the "limit frequency" of each result would appear just as often as every other one. If you had an irregularly-shaped coin (say, one that had been partially melted) it wouldn't be very reasonable to just assume the limit frequency of "heads" is 0.5.

I gave you the list [(++), (+-), (-+), (--)] and you claimed it was impossible to calculate the probability of (++) in the list. So had I given you [(++), (++), (+-), (-+), (--)] and asked the same question, you would still have claimed it was impossible. But anyone who has ever heard anything about probability will immediately realize that each item in the first list occurs once and, since there are 4 items, P(++) must be 1/4 in that list; for the second one, P(++) will be 2/5. I haven't done anything here other than use the symmetry which is present in the given list to calculate the probability. But you already said those values are wrong, which is very telling.

JesseM said:
billschnieder said:
In statistics, if you are given the population, you can calculate the true probabilities without any trials. It is done every day in the frequentist approach, which you claim to understand!
Not in the "limit frequentist" approach where we are talking about frequencies in the limit as number of times the population is sampled approaches infinity (unless we make some auxiliary assumptions about how the population is being sampled, like the assumption we're using a process which has an equal probability of picking any member of the population)
Oh, so now you are saying that, given a population from which you can easily calculate relative frequencies, you will still not be able to use your favorite "limit frequentist" approach to obtain estimates of true probabilities, because the process used to sample the population might not be fair. Wow! You have really outdone yourself. If the "limit frequentist" approach is this useless, how come you stick to it, if not just for argumentation purposes?

JesseM said:
There is also such a thing as "finite frequentism" which just says if you have a finite set of N trials, and a given result occurred on m of those trials, then the "probability" is automatically defined as m/N
I have already explained to you multiple times that the list I gave you in the question is an abstract list, not a "result" of "trials". So what you say above is a straw-man argument. And you agreed that I never said anything about trials. So for the last time, be honest about what I asked.
 
  • #1,317
JesseM said:
Bell's theorem, and your odd criticisms of it which seem to presuppose a notion of probability different from the limit frequentist notion
We have been discussing Bell's inequality, NOT Bell's theorem. There is a difference.

JesseM said:
So can you please just answer the question: are you using (or are you willing to use for the sake of this discussion) the limit frequentist notion of probability, where "probability" is just the frequency in the limit as the number of trials goes to infinity?
No! I am not willing to pick and choose the definition of probability for argumentation purposes. First you said it was ONLY the "frequentist" view you wanted. Now it is ONLY a particular variant of frequentism that you want, except when it involves coins and dice, where you really use the "bayesian" view. I'm not interested in that type of pointless exercise.


JesseM said:
No, the "standard mathematical definition" of an expectation value involves only the variable whose value you want to find the expectation value for, in this case the product of the two measurement results.
...
The standard definition would give us:
\sum_{i=1}^{N} R_i P(R_i)

Wikipedia: http://en.wikipedia.org/wiki/Expected_value
In general, if X is a random variable defined on a probability space (Ω, Σ, P), then the expected value of X, denoted by E(X), <X>, or \overline{x}, is defined as

E(X) = \int_{\Omega}XdP

...

The expected value of an arbitrary function of X, g(X), with respect to the probability density function f(x) is given by the inner product of f and g:
E(g(X)) = \int_{-\infty}^{\infty} g(x) f(x) \, dx

You are way off base. Bell's equation (2) is the standard mathematical definition. The only difference between Bell's equation (2) and the last equation above is the choice of symbols:
X = λ
g(X) = g(λ) = A(a,λ)*B(b,λ)
f(X) = ρ(λ)

Bell is not trying to redefine anything. He is simply using the standard mathematical definition of expectation value for the paired product. Word games will not save you here.

For the last time, you are the one misrepresenting Bell:
Bell said:
E(a,b) = \int \rho (\lambda ) A(a,\lambda )B(b,\lambda ) d\lambda
Note the dλ at the end of the expression! There is no expression in Bell's paper such as the following:
JesseM said:
E(a,b) = (+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (+1)*P(detector with setting a gets result -1, detector with setting b gets result -1)

Your claim that such an expression is missing because Bell was simplifying for physicists is a cop-out. Furthermore, there is no mention of "limit frequentist", let alone "frequentist", in Bell's paper. You are invoking those terms now only to escape humiliation.
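For concreteness, a small Monte Carlo sketch using a toy local hidden-variable model invented purely for illustration (the functions A, B and the uniform ρ(λ) are arbitrary choices, not Bell's): it evaluates E(a,b) both as the integral over λ in Bell's equation (2) and as the outcome-weighted sum quoted just above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary deterministic outcome functions and a uniform ρ(λ); these are
# illustrative choices only, not Bell's own model.
def A(a, lam):
    return np.sign(np.cos(lam - a))    # ±1 outcome at detector 1

def B(b, lam):
    return -np.sign(np.cos(lam - b))   # ±1 outcome at detector 2

a, b = 0.0, np.pi / 3
lam = rng.uniform(0.0, 2 * np.pi, 1_000_000)   # λ drawn from ρ(λ)

# Bell's form: E(a,b) = ∫ ρ(λ) A(a,λ) B(b,λ) dλ, estimated by Monte Carlo.
E_integral = np.mean(A(a, lam) * B(b, lam))

# Outcome-probability form: (+1)P(++) + (-1)P(+-) + (-1)P(-+) + (+1)P(--).
Avals, Bvals = A(a, lam), B(b, lam)
P_pp = np.mean((Avals == 1) & (Bvals == 1))
P_pm = np.mean((Avals == 1) & (Bvals == -1))
P_mp = np.mean((Avals == -1) & (Bvals == 1))
P_mm = np.mean((Avals == -1) & (Bvals == -1))
E_outcomes = P_pp - P_pm - P_mp + P_mm

print(E_integral, E_outcomes)   # the two numbers agree
```

The two expressions are the same quantity written before and after summing over λ, so for any model of this form they agree exactly.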

JesseM said:
Hopefully you at least agree that in the limit as the number of trials becomes large, the expression for the empirical average below should approach my definition
Your so-called definition is not a definition, but an approximation of a certain expectation value, which is different from the one used by Bell. Bell's expectation value is obtained by integrating over all λ, as equation (2) of his paper clearly shows. Yours is a discrete sum over outcomes without regard for λ. It is a non-starter deliberately designed to avoid the pitfall of the uniform ρ(λ) requirement, which you know is fatal to your argument.

JesseM said:
In that case, does your whole argument hinge on the fact that you think Bell's equation (2) was giving an alternate definition of "expectation value", one which would actually differ from the one I give?
You wish. But NO! I have explained my argument very clearly in point-by-point form and in detail. It is your argument which hinges on the hope that Bell's equation (2) is something other than the application of the standard mathematical definition of the expectation value of a paired product to his situation of interest.
 
Last edited:
  • #1,318
True! Bell was writing for an audience of physicists, who would understand that he didn't mean for (2) to indicate he was totally rewriting the standard meaning of "expectation value"
You have provided no proof that equation (2) from Bell's paper is not simply an application of the standard definition of expectation value of a paired product of functions of λ as the wikipedia article clearly shows:
The expected value of an arbitrary function of X, g(X), with respect to the probability density function f(x) is given by the inner product of f and g:
E(g(X)) = \int_{-\infty}^{\infty} g(x) f(x) \, dx

JesseM said:
Please answer my question about whether you are willing to just use the "limit frequentist" notion of probability in this discussion--and if you are, do you see why with this understanding it doesn't make sense to say "ρ(λ) in the sample is not significantly different from ρ(λ) in the population" when you are really just talking about the frequencies of different values of λi in the finite sample, not the frequencies that would be found if we took an infinite sample under the same conditions?

Translation: My argument doesn't make sense in the alternate universe in which your limited frequentist view is what I'm using to make my argument?! Is that the best you can do? I'm done with this rubbish!

JesseM said:
Your answer only seems to address the part that your ability to do this "resorting" doesn't guarantee the value of λ was really the same for all three (and you basically seemed to agree but say it doesn't matter), but you didn't address the point that even the "hidden triples" may be different than the imaginary triples you created via resorting. For example, suppose after resorting we find the 10th iteration of the first run is a=+1,b=-1, the 10th iteration of the second run is b=-1,c=-1 and the 10th iteration of the third is a=+1,c=-1. Then we are free to self-consistently imagine that each run had the same triple for iteration #10, namely a=+1,b=-1,c=-1. However, in reality this might not be the case--for example, the 10th iteration of the first run might actually have been generated from the triple a=+1,b=-1,c=+1. So, the statistics of the imaginary triples you come up with after resorting might not match the statistics of actual triples on each run, or on all three runs combined.

You missed this part

billschnieder said:
If ρ(λ) in the sample is not significantly different from ρ(λ) in the population, then the distribution of the outcomes will not be significantly different. However, just because the distribution of the outcomes is not significantly different is not proof that ρ(λ) is the same. It is a necessary but not a sufficient condition as you still must be able to resort the data.
 
  • #1,319
RUTA said:
I would love to hear Einstein's thoughts about the situation now, given the vast experimental evidence in support of QM over "Einsteinian reality."
Einstein was a die-hard empiricist. Take for example this remark of his from http://www.marxists.org/reference/subject/philosophy/works/ge/einstein.htm :
"To his [Margenau] Sec. I: "Einstein's position . . . contains features of rationalism and extreme empiricism..." This remark is entirely correct."

You should not expect that Einstein, as an extreme empiricist, would have left matters of interpretation of experiments in the hands of Aspect and Zeilinger and would not have formulated his own viewpoint.
 
Last edited by a moderator:
  • #1,320
DevilsAvocado said:
I love Einstein, he’s my hero. The question is – do you think that he would have rejected Bell's Theorem and EPR-Bell experiments?
Let me give a longer quote from Einstein, from http://www.marxists.org/reference/subject/philosophy/works/ge/einstein.htm :
"One arrives at very implausible theoretical conceptions, if one attempts to maintain the thesis that the statistical quantum theory is in principle capable of producing a complete description of an individual physical system. On the other hand, those difficulties of theoretical interpretation disappear, if one views the quantum-mechanical description as the description of ensembles of systems.

I reached this conclusion as the result of quite different types of considerations. I am convinced that everyone who will take the trouble to carry through such reflections conscientiously will find himself finally driven to this interpretation of quantum-theoretical description (the Psi-function is to be understood as the description not of a single system but of an ensemble of systems).

Roughly stated the conclusion is this: Within the framework of statistical quantum theory there is no such thing as a complete description of the individual system. More cautiously it might be put as follows: The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems. In that case the whole "egg-walking" performed in order to avoid the "physically real" becomes superfluous. There exists, however, a simple psychological reason for the fact that this most nearly obvious interpretation is being shunned. For if the statistical quantum theory does not pretend to describe the individual system (and its development in time) completely, it appears unavoidable to look elsewhere for a complete description of the individual system; in doing so it would be clear from the very beginning that the elements of such a description are not contained within the conceptual scheme of the statistical quantum theory. With this one would admit that, in principle, this scheme could not serve as the basis of theoretical physics. Assuming the success of efforts to accomplish a complete physical description, the statistical quantum theory would, within the framework of future physics, take an approximately analogous position to the statistical mechanics within the framework of classical mechanics. I am rather firmly convinced that the development of theoretical physics will be of this type; but the path will be lengthy and difficult."


As you can see, the direction taken by EPR-Bell experiments is exactly the one Einstein was talking (dreaming) about. They try to investigate the behavior of individual particles.
However, their interpretation is mainly based on the assumption that QM is a valid description of individual systems, contrary to what Einstein believed.

So I think that Einstein would have discarded without regret any restrictions placed by orthodox QM on a local realistic interpretation.

The restriction I am talking about is that the same measurement settings at both sites should give the same outcome with probability 1.

DevilsAvocado said:
And how does the Ensemble Interpretation explain if we decide to have very long intervals between every entangled pair in EPR-Bell experiments, let’s say weeks or months? Where is the "Global RAM" situated in a case like this? That fixes the experimentally proved QM statistics, for the whole 'spread out' ensemble??
If we view the Ensemble Interpretation as a physically realistic interpretation, and not as some other metaphysical interpretation, we of course cannot talk about some "Global RAM".
We can talk only about some "local RAM" that is justifiable by the physical dynamics inside the equipment used in the experiments.

If we decide to have very long intervals between every entangled pair we should expect complete decoherence of entanglement.
 
Last edited by a moderator:
  • #1,321
DevilsAvocado said:
... trying to talk reasonable to Bill is a waste of time. He lives in his own little bubble; firmly convinced he represents the "universe", when the fact is that he’s totally lost and totally alone in his "reasoning".
DA, I don't know if you know this, but billschnieder is a working scientist. I don't think that either DrC or JesseM are.

I admire your honest efforts to understand the conundra surrounding Bell's theorem. I think that billschnieder, and JesseM, and DrC, and all of us are interested in understanding this stuff. And, honestly, I don't think that any of us have a definitive way of expressing anything about the nature of reality.

billschnieder's expertise and knowledge exceeds yours, and I think you should take that into account, just as you apparently do wrt JesseM and DrC.

These are not easy considerations. If they were, then notable physicists and mathematicians wouldn't still be arguing about them. And, while I appreciate your input and your apparent interest, I think you should focus on the precise arguments being made. I'm not sure they're good arguments. Maybe you can sort it out, and clarify it, for all of us. But, please, focus on the arguments. They're there to be refuted. So, refute them, or agree with them, or just say that you don't understand them -- and ask some questions. But, please, you and nismaratwork, stop with the 'fanboy' stuff.
 
  • #1,322
RUTA said:
I'm with DrC, I also don't believe "the Moon is there when nobody looks." By "when nobody looks" I mean "when not interacting with anything."
And when is it ever the case that something is not interacting with anything?

RUTA said:
In Relational Blockworld, if the entity "isn't there," i.e., is "screened off," it doesn't exist at all. So, the answer to your question is that there is no Moon to wonder.
Come on RUTA, are you saying that your Relational Blockworld is a description of the physical reality?
 
Last edited:
  • #1,323
ThomasT said:
And when is it ever the case that something is not interacting with anything?

When it exhibits wave-like behavior. Once it interacts with its environment, it acquires definite position (particle-like behavior) per decoherence.

RBW is not the only interpretation in which "non-interacting" means "non-existent." I got that idea from Bohr, Ulfbeck and Mottelson. Zeilinger has also been credited with that claim regarding photons.

ThomasT said:
Come on RUTA, are you saying that your Relational Blockworld is a description of the physical reality?

Absolutely, RBW is an ontological interpretation of QM. What in particular strikes you as unreasonable about this ontology? The non-existence of non-interacting entities (manifested as nonseparability of the experimental equipment)? Or, blockworld?
 
  • #1,324
billschnieder said:
I'm done with this rubbish!

I can only hope...

An interesting note for those still reading: GHZ theorem, another no-go theorem for local realism, shows that EVERY trial will have QM and local realism giving opposite predictions. I.e.

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...

Guess which is experimentally demonstrated? No statistics required! No ensemble interpretation required!
 
  • #1,325
DrChinese said:
An interesting note for those still reading: GHZ theorem, another no-go theorem for local realism, shows that EVERY trial will have QM and local realism giving opposite predictions. I.e.

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...

Guess which is experimentally demonstrated? No statistics required! No ensemble interpretation required!
Are you familiar with GHZ experiments at all?

Anyways
from this paper - http://prl.aps.org/abstract/PRL/v91/i18/e180401

"In conclusion, we have demonstrated the statistical and nonstatistical conflicts between QM and LR in fourphoton GHZ entanglement. However, it is worth noting that, as for all existing photonic tests of LR, we also had to invoke the fair sampling hypothesis due to the very low detection efficiency in our experiment."

Guess what? The fair sampling hypothesis does not quite hang together with the ensemble interpretation.
 
Last edited by a moderator:
  • #1,326
DrChinese said:
I can only hope...

An interesting note for those still reading: GHZ theorem, another no-go theorem for local realism, shows that EVERY trial will have QM and local realism giving opposite predictions. I.e.

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...

Guess which is experimentally demonstrated? No statistics required! No ensemble interpretation required!

There are not many places where the use of locality in the usual derivation of GHZ is explicitly explained: do you know any, some, many?
I am talking about Mermin's version using 3 particles, not the original 4-particle configuration.

The use of realism is obvious, of course.
(I will not tell who (among the great experts) thought that locality was not used in GHZ.)

By the way, GHZ is often called "Bell's Theorem without Inequalities". I mentioned that before as one reason why one should not equate Bell inequalities (a form of Boole's inequalities, as pointed out long ago by Itamar Pitowsky in several papers getting deeper and deeper into that matter; this is related to earlier work by Fine) with Bell's Theorem, as was claimed in a link related to a dispute around billschnieder.
 
  • #1,327
zonde said:
Are you familiar with GHZ experiments at all?

Anyways
from this paper - http://prl.aps.org/abstract/PRL/v91/i18/e180401

"In conclusion, we have demonstrated the statistical and nonstatistical conflicts between QM and LR in fourphoton GHZ entanglement. However, it is worth noting that, as for all existing photonic tests of LR, we also had to invoke the fair sampling hypothesis due to the very low detection efficiency in our experiment."

Guess what? The fair sampling hypothesis does not quite hang together with the ensemble interpretation.

What does Fair Sampling have to do with my comment? If I predict a -1 every time, and you predict +1 every time, and it always comes up -1... Then it doesn't really much matter how often that occurs.

As I have said a million times :smile: all science involves the fair sampling assumption. There is nothing special about GHZ or Bell tests in that regard.

And as I have also said too many times to count: if the GHZ result is due to some unknown weird bias... what is the dataset we are sampling that produces such a result? I would truly LOVE to see you present that one! Let's see:

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...
Actual sample=Oops!
 
Last edited by a moderator:
  • #1,328
Actually, I had the predictions of LR and QM reversed in my little sample. It should be more like:

QM=+1, +1, +1, ...
LR=-1, -1, -1, ...

See this article from Zeilinger and Pan:

Multi-Photon Entanglement and Quantum Non-Locality (2002)

"Comparing the results in Fig. 16.7, we therefore conclude that our experimental results verify the quantum prediction while they contradict the local-realism prediction by over 8standard deviations; there is no local hidden-variable model which is capable of describing our experimental results."
 
  • #1,329
ThomasT said:
DA, I don't know if you know this, but billschnieder is a working scientist. I don't think that either DrC or JesseM are.
In what field? A search of arxiv.org for author "Bill Schnieder" turns up no results. Likewise, a general Google search for "Bill Schnieder" and "abstract" doesn't seem to turn up any papers with abstracts (and most papers these days have at least the abstract online). Can you link to any work by him?
 
  • #1,330
billschnieder said:
So you are saying that if you were not excluding "finite frequentism" you would be able to give an answer?
Only if your list was understood as a statistical sample (either a sample drawn from a larger population, or the results of a series of trials), or if you add some conditions like that an experimenter is picking a sample from the "population" represented by the list. In the first case we could use finite frequentism to give probabilities, in the second case we could even use "limit frequentism" if we added the condition that the experimenter was picking randomly using a method that was equally probable (in the limit frequentist sense) to give any entry on the list.

On the other hand, I've never heard of any authority on statistics talking about determining a "probability" from an "abstract list" which is not interpreted as either a sample or a population. If you want to continue with the "abstract list" argument, please find a source where some authority does something like this.
billschnieder said:
So, you are effectively picking and trimming your definition of probability for argumentative purposes, as more of your statements below will show.
I had already stated clearly that I was only interested in talking about probabilities defined in the "frequency in limit as number of trials/sample size goes to infinity", since I think these are the only types of probabilities relevant to Bell's derivation. Again, are you willing to at least consider whether Bell's derivation might make sense (and not have the problems of limited applicability you argue for) when his probabilities are interpreted in these terms, or are you basically refusing to consider the possibility of an interpretation of the paper different from your own, suggesting you are not really interested in trying to understand Bell's argument in its own terms but just in making a lawyer-like rhetorical case against him?
JesseM said:
Does your list of four give us enough information to know the frequency of ++ in the limit as the sample size goes to infinity?
billschnieder said:
Bah! This list is the entire context of the question! The list is the population.
You didn't specify that when you first posted the list, and given that all your previous examples of lists of + and -'s involved a series of trials from a run of a given experiment, there was no reason for me to think the list was intended to be something totally different.
billschnieder said:
The true probability of (++) in the list is the relative frequency of (++) in the list. This is the frequentist approach, which you now want to abandon in order to stay afloat.
This is just a ridiculous criticism, Bill. I have always been using what I now call the "limit frequentist approach" to avoid your quibbling about finite frequentism (most scientists nowadays also just talk about 'frequentism' when they mean limit frequentism), you can see that in every post where I talked about frequentism I explained I was talking about the limit as the number of trials approached infinity. Go on, find a single post of mine where my own use of probability involved anything other than limit frequentist probabilities; you won't be able to, showing that your "you now want to abandon" comment is either based on totally misreading what I've been saying all along, or knowingly misrepresenting it.
billschnieder said:
Hehe! Do you know of anybody who has ever performed an infinite number of coin or die tosses? I think not. So you can not know what the limit will be as the number of tosses tends toward infinity.
No, you can never know with absolute certainty what the "limit frequentist" probabilities are, but you can have a high degree of confidence that they are close to some value based on both theoretical arguments (like the symmetry of fair coins) and empirical averages with large numbers of trials. In any case, Bell's derivation does not require us to actually know what the limit frequentist probabilities of anything are, it just assumes they have some objective values (encapsulated in a function like ρ(λ)) and that these objective values have certain properties (like ρ(λ) being independent of the detector settings), and derives inequalities for the expectation values (themselves just weighted sums of objective probabilities for different combinations of results) based on that. If all of Bell's theoretical assumptions about the objective probabilities were correct, then given the law of large numbers it should be very unlikely that the empirical averages for an experiment with a great many trials would violate the inequality if the "true" expectation values (determined by limit frequentist probabilities) obey it.
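As a concrete illustration of that last point, a minimal simulation of an arbitrary toy local hidden-variable model (the outcome functions and the setting-independent uniform ρ(λ) below are illustrative choices, not anything from Bell's paper): with a large number of simulated trials, the empirical CHSH combination of averages stays within the local bound of 2, whereas the quantum singlet-state prediction at the same angles is 2*sqrt(2) ≈ 2.83.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500_000   # number of simulated particle pairs per setting combination

# An arbitrary toy local hidden-variable model: deterministic ±1 outcomes and a
# uniform ρ(λ) that does not depend on the detector settings.
def A(a, lam):
    return np.sign(np.cos(lam - a))

def B(b, lam):
    return -np.sign(np.cos(lam - b))

def E(a, b):
    lam = rng.uniform(0.0, 2 * np.pi, N)      # fresh λ sample for each run
    return np.mean(A(a, lam) * B(b, lam))     # empirical average of products

# Standard CHSH setting angles.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))
# Prints about 2 (this particular toy model saturates the local bound, up to
# sampling noise).  The quantum singlet prediction at these angles is
# 2*sqrt(2) ≈ 2.83, which no model of this local form can reach.
```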
billschnieder said:
Furthermore, did you really think I will not notice the fact that you have now abandoned your favorite frequentist approach and now you are using the bayesian approach (see underlined text above) to decide that the P(Heads) = 0.5.
No. First of all, I'm not saying that the P(heads) is actually guaranteed to equal 0.5, just that it's physically plausible that it would be--that's my hypothesis about the true probability, as distinguished from the true probability itself. A theorist who uses limit frequentist definitions when making theoretical arguments about probabilities (like Bell's) is free to use Bayesian methods when trying to come up with an empirical estimate about what the objective probabilities are. But a Bayesian would say the "probability" is just your best estimate (a more 'subjective' definition of the meaning of probability), while a limit frequentist would distinguish between the estimate and the "true" probability.

Second of all, I'm not using the symmetry of the sample space as a basis for my estimate that P(heads)=0.5, I'm using the actual physical symmetry of the coin itself. If I had an irregular coin with more weight on one side than the other I wouldn't make this estimate, despite the fact that the sample space still contains only two possible outcomes so a Bayesian (or Jaynesian) might say the principle of indifference demands our prior distribution assign each outcome an equal probability.

In any case Bell's derivation does not require any estimates of the true limit frequentist probabilities given by ρ(λ). Only once we have derived the inequality do we have to worry about empirical measurements, and here a limit frequentist can just argue that by the law of large numbers, our sample averages are unlikely to differ significantly from the "true" expectation values (determined by the 'true' limit frequentist probabilities) if the number of trials is large enough.
billschnieder said:
I'm sure if I looked, I will not need to look hard to find a post in which you wrote a list not very different from mine and also wrote P(++) to be 1/4 or similar, without having performed an infinite number of damned "trials".
Nope, you won't be able to, I have been quite consistent about understanding probabilities in terms of the limit frequentist approach, since some of my earliest discussions with you--for example see post #91 from the 'Understanding Bell's Logic' thread, posted back in June, where I said:
It's still not clear what you mean by "the marginal probability of successful treatment". Do you agree that ideally "probability" can be defined by picking some experimental conditions you're repeating for each subject, and then allowing the number of subjects/trials to go to infinity (this is the frequentist interpretation of probability, its major rival being the Bayesian interpretation--see the article Frequentists and Bayesians) ... Do you think there are situations where even hypothetically it doesn't make sense to talk about repetition under the same experimental conditions (so even a hypothetical 'God' would not be able to define 'probability' in this way?) If so, perhaps you'd better give me your own definition of what you even mean by the word "probability", if you're not using the frequentist interpretation that I use.
Even earlier than my discussions with you, in January of 2009 I explained to a different poster that I understood derivations of Bell inequalities to involve frequentist probabilities defined in the limit as the number of trials goes to infinity, see this post:
I didn't say anything about you knowing the objective facts. Again, the frequentist idea is to imagine a God's-eye perspective of all the facts, and knowing the causal relations between the facts, figure out what the statistics would look like for a very large number of trials.

...

If you believe there are objective facts in each trial, even if you don't know them, then it should be possible to map any statement about subjective probabilities into a statement about what this imaginary godlike observer would see in the statistics over many trials--do you disagree? For example, suppose there is an urn with two red balls and one white ball, and the experiment on each trial is to pick two balls in succession (without replacing the first one before picking the second), and noting the color of each one. If I open my hand and see that the first one I picked was red, and then I look at the closed fist containing the other and guess if it'll be red or white, do you agree that I should conclude P(second will be white | first was red) = 1/2? If you agree, then it shouldn't be too hard to understand how this can be mapped directly to a statement about the statistics as seen by the imaginary godlike observer. On each trial, this imaginary observer already knows the color of the ball in my fist before I open it, of course. However, if this observer looks at a near-infinite number of trials of this kind, and then looks at the subset of all these trials where I saw that the first ball was red, do you agree that within this subset, on about half these trials it'll be true that the ball in my other hand was white? (and that by the law of large numbers, as the number of trials goes to infinity the ratio should approach precisely 1/2?)

If you agree with both these statements, then it shouldn't be hard to see how any statement about subjective probabilities in an objective universe should be mappable to a statement about the statistics seen by a hypothetical godlike observer in a large number of trials. If you think there could be any exceptions--objectively true statements of probability which cannot be mapped in this way--then please give an example. It would be pretty earth-shattering if you could, because the frequentist interpretation of probabilities is very mainstream, I'm sure you could find explanations of probability in terms of the statistics over many trials in virtually any introductory statistics textbook.
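As a quick numerical illustration of the urn example quoted above, here is a minimal Python sketch (purely illustrative, nothing from the original posts; the helper name urn_trial is made up) of the frequentist reading: restrict attention to the trials where the first ball drawn is red, and check that the fraction of those trials where the second ball is white approaches 1/2.

Code:
import random

def urn_trial():
    # Urn with two red balls and one white ball; draw two without replacement.
    urn = ['red', 'red', 'white']
    random.shuffle(urn)
    return urn[0], urn[1]   # (first draw, second draw)

trials = 1_000_000
first_red = 0
second_white_given_first_red = 0
for _ in range(trials):
    first, second = urn_trial()
    if first == 'red':
        first_red += 1
        if second == 'white':
            second_white_given_first_red += 1

# Relative frequency of "second is white" within the subset where the first was red;
# by the law of large numbers this should be close to 1/2 for a large number of trials.
print(second_white_given_first_red / first_red)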
So I think it's safe to say I have been quite consistent in my understanding of what "probability" means in the context of Bell's derivation, and any notion of yours that I've been waffling is just another example of your leaping to an uncharitable conclusion when you see any ambiguity in the way I have expressed myself.
billschnieder said:
You say it is impossible to calculate an answer, then when I give you the answer, you then say the answer is wrong. How do you know it is wrong, if you are unable to calculate the correct one?
What I said in my original response (post #1249) was "No, you can't calculate the probability just from the information provided, not if we are talking about objective frequentist probabilities rather than subjective estimates." If we have some additional information about what the list represents, like that it is a population and we have an experimenter picking a random sample from the population (using a method that we are told has an equal probability of picking any of the four entries, with 'probability' understood in the limit frequentist sense), then we can certainly calculate the probability. I already made this point in post #1277:
Again, you said nothing about "randomly picking" from a list, you just gave a list itself and asked for the probabilities of one entry on that list. If you want to add a new condition about "randomly picking", with "randomly" meaning that you have an equal limit frequentist probability of picking any of the four entries on the list, then in that case of course I agree that P(++)=1/4...well duuuuh! But that wasn't the question you asked.
Now, can we get back to discussing Bell's theorem, and not some silly irrelevant example you came up with to prove I "don't understand probability"?
billschnieder said:
JesseM said:
JesseM said:
Note that the wikipedia article says "close to the expected value", not "exactly equal to the expected value".
JesseM said:
An "expectation value" like E(a,b) would be interpreted in frequentist terms as the expected average result in the limit as the number of trials (on a run with detector settings a,b) goes to infinity
Um, how do you figure? The two statements of mine are entirely compatible, obviously you are misunderstanding something here
It is quite clear from the two statements that if average from the law of large numbers is close to but not equal to the true expectation value, it can not be the definition of the expectation value! Which one is it? The definition of the expectation value can not at the same time be only approximately equal to it!?
I don't understand the phrase "average from the law of large numbers". The average from any finite number of trials N can be different from the true expectation value, no matter how large of a finite number N we pick. However, the law of large numbers says that in the limit as N approaches infinity, the average approaches the expectation value with probability 1. Another way of putting this is that if we pick some specific real number epsilon between 0 and 1, then no matter how small of an epsilon we pick, the probability that the empirical average (the 'sample mean') differs from the expectation value by an amount greater than or equal to epsilon should become smaller and smaller with greater values of N, approaching 0 in the limit as N approaches infinity. If you're familiar with the official calculus definition of a "limit" in terms of the "epsilon-delta" definition (see here), this should look pretty familiar.
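To make that epsilon statement concrete, here is a minimal Python sketch (illustrative only; the values chosen for epsilon and the run counts are arbitrary) estimating, for a fair coin coded as +1/-1 with true expectation value 0, how often the sample mean misses the true value by at least epsilon as N grows:

Code:
import random

def sample_mean(n):
    # Average of n fair-coin results coded as +1/-1; the true expectation value is 0.
    return sum(random.choice([+1, -1]) for _ in range(n)) / n

epsilon = 0.05
runs = 1000
for n in (100, 1000, 10000):
    misses = sum(1 for _ in range(runs) if abs(sample_mean(n)) >= epsilon)
    # Fraction of runs whose sample mean differs from 0 by epsilon or more;
    # the law of large numbers says this fraction approaches 0 as n goes to infinity.
    print(n, misses / runs)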
JesseM said:
billschnieder said:
You can visualize it by thinking that if you would randomly pick an entry from the list I gave you
Well, that's an entirely separate question, because then you are dealing with a process that can repeatedly pick entries "randomly" from the list for an arbitrarily large number of trials. But you didn't say anything about picking randomly from the list, you just presented a list of results and asked what P(++) was.
billschnieder said:
It is not an entirely separate question.
It is because my original objection was that your problem didn't give enough "information" for any definite answer, and here you are providing more information (the idea that we are randomly picking entries from the list and want to know the probability of picking a given entry).
billschnieder said:
I did not mention any trials in my question. But you have stated that the only notion of probability you want to use is the "limit frequentist probability", even though initially you just said "frequentist", but if you want to stick to that limited approach, which is only interested in "trials", you could still have provided an answer to the question by imagining what the limit will be if you actually randomly picked items from my list. Is it your claim that this is also impossible?
No, I already told you at the end of post #1277 that this was fine, although you would have to specify that you were picking in a way that gave an equal probability (in limit frequentist terms) of selecting any of the four items on the list, since it's perfectly possible to conceive a method of selection that would make some entries on the list more probable than others (for example, start at the top of the list, if it's 'heads' pick the top entry and if it's 'tails' move to the next entry and repeat this procedure until you either get a heads or get to the last entry on the list...this method gives a probability of 1/2 for picking the first entry, 1/4 for picking the second, 1/8 for picking the third and 1/8 for picking the fourth).
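That coin-driven selection method is easy to simulate; here is a minimal Python sketch (illustrative only) of the procedure just described, whose relative frequencies should come out near 1/2, 1/4, 1/8 and 1/8:

Code:
import random
from collections import Counter

def biased_pick(entries):
    # Start at the top of the list; on "heads" take the current entry,
    # on "tails" move to the next one; the last entry is taken by default.
    for entry in entries[:-1]:
        if random.random() < 0.5:   # heads
            return entry
    return entries[-1]

entries = ['first', 'second', 'third', 'fourth']
picks = 1_000_000
counts = Counter(biased_pick(entries) for _ in range(picks))
for entry in entries:
    print(entry, counts[entry] / picks)   # roughly 0.5, 0.25, 0.125, 0.125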
billschnieder said:
Secondly, despite my repeated correction of your false statements that I presented "results" or "trials", you keep saying it. You quickly jumped to claim I never mentioned trials, yet in the next sentence, you say I presented "results", even though I never characterized the list as such, and corrected your attempts to characterize it as such multiple times! You are not being honest.
Yes, Bill, every time I colloquially use a word like "result" in a way that could possibly be interpreted as a mischaracterization of something you have said, it proves I am a devious snake who is "not being honest", rather than just that I am an ordinary human who sometimes speaks a bit sloppily. Here I did not intend "result" to explicitly mean the results of a series of trials; it could be any list of data (including a list representing a population of possible "results" that an experimenter might get when picking randomly from the population).
 
Last edited:
  • #1,331
charlylebeaugosse said:
BTW: someone wrote about the need for mathematical physicists in order to solve any big problem. What have they brought to physics that is acknowledged by the rest of the physicists? I have great respect for them, some of the best ones are my friends, but their contributions are considered more as math. There is a funny story about Simon and Feynman where RF asked BS "who are you, young man?", to which BS answered "I am BS", to which the reply was "and what is your field?". And BS comments: can you imagine that F did not know about my work? I.e., for me: BS did not even understand that RF couldn't care less about the type of things he was doing.

I hope that mathematical physicists will have some recognition as physicists some day. Some of them have deep physical intuition besides tremendous technical power, but so far, ...

Is that supposed to be funny? Modern theoretical physics is mainly mathematical physics; in fact it's been that way for a century or so. The last great achievements by non-mathematicians were probably back in Faraday's time.

The foundations of QM have been debated for nearly a century by many great thinkers, and the conclusion is that nothing will get resolved by "word" arguments about interpretations, there needs to be a model to back up the argument and that model has to be in the language of mathematics.

Of course we need experimental results against which to check our models, and in relation to the question of this thread we have the Bell experiments of Aspect et al, GHZ and delayed-choice erasure experiments, all of which suggest non-locality unless you are a deluded person who thinks a classical explanation makes sense. (The other explanations in terms of reinterpreting reality may have their time, but let's give the physics a chance before opening the gates for the philosophical hordes.)

The most promising current model that might account for non-locality seems to be the Holographic Principle, but to properly understand that you need to understand its origins in the work of Bekenstein and Hawking in the 70s on Black Hole Thermodynamics, then you need to understand how it works with current models in String Theory, LQG etc.

This is difficult stuff, with a heavy dose of mathematical formalism. It is the arena where the useful debate about understanding the universe is taking place, not the pseudo philosophical word-play that goes on in these forums.

If you ask the current great Physicists about QM interpretations they will probably admit we are no nearer a resolution, but they do at least know what they're talking about, here's what Joe Polchinski has to say about the fact that String Theory does not attempt to solve the interpretation problem:
This is an interesting question, to which there is no definite answer. On the one hand, since it was possible to quantize the other three interactions without changing the interpretation of QM, it is not obvious that one should not be able to do the same for gravity. If we restrict to 'laboratory' experiments with gravity (even building black holes in the lab), there is no sharp paradox that would require us to modify QM. QM makes us queasy, but if it gives consistent predictions for all processes we may just have to live with that. Things are much less clear when you get to cosmology. Chaotic inflation, for example, does seem to lead to paradoxes, which might be the clue to a deeper understanding of QM

Where in the last sentence he hints at MWI, but as you can see he's more interested in hard physics, not philosophical fluff, (quote taken from his comments in this blog entry replying to Smolin's The Trouble with Physics)
 
  • #1,332
JesseM said:
Nope, you won't be able to, ...
It took me 2 minutes to find this, and it looks worse than I had thought. Pay attention to how you characterized Bell's expectation value. Also pay attention to how you are factorizing ρ(λ) within the summation. There is no escape.

JesseM said:
When scratched, any given box will reveal either a cherry or a lemon. Once Alice and Bob have both found the fruit behind the box they choose, they can adopt the convention that a cherry is represented by a +1 and a lemon is represented by a -1, and multiply their respective numbers together to produce a single number for each trial (and that single number will itself be +1 if they both got the same fruit, and -1 if they got different fruits). Then we are interested in the "expectation value" for a given choice of boxes--for example, E(a,b') means the average result Alice and Bob will get after multiplying their numbers together on the subset of trials where Alice chose to scratch box a and Bob chose to scratch box b'. The CHSH inequality then states that if we define the value S by S=E(a,b) - E(a,b') + E(a',b) + E(a',b'), then -2 \leq S \leq 2.

As for the hidden states, there are 16 different possibilities (and here I am replacing each fruit with the number they've chosen to represent it, so a=+1 means that the hidden fruit in box a on Alice's card is a cherry):

1: a=+1, a'=+1, b=+1, b'=+1
2: a=+1, a'=+1, b=+1, b'=-1
3: a=+1, a'=+1, b=-1, b'=+1
4: a=+1, a'=+1, b=-1, b'=-1
5: a=+1, a'=-1, b=+1, b'=+1
6: a=+1, a'=-1, b=+1, b'=-1
7: a=+1, a'=-1, b=-1, b'=+1
8: a=+1, a'=-1, b=-1, b'=-1
9: a=-1, a'=+1, b=+1, b'=+1
10: a=-1, a'=+1, b=+1, b'=-1
11: a=-1, a'=+1, b=-1, b'=+1
12: a=-1, a'=+1, b=-1, b'=-1
13: a=-1, a'=-1, b=+1, b'=+1
14: a=-1, a'=-1, b=+1, b'=-1
15: a=-1, a'=-1, b=-1, b'=+1
16: a=-1, a'=-1, b=-1, b'=-1

In this case, define something like A(a,12) to mean "the value Alice gets if she picks a and the hidden state of the two cards is 12", so going by the above we'd have A(a,12)=-1. Similarly B(b',7)=+1, and so forth. And we can assume that there must be well-defined probabilities for each of the possible hidden states, which can be represented with notation like p(8) and p(15) etc.

Since the expectation value E(a,b) is the average value Alice and Bob get when they multiply their results together in the subset of trials where Alice picks box a and Bob picks box b, we should have: E(a,b) = \sum_{N=1}^{16} A(a,N)*B(b,N)*p(N). Likewise, we should also have E(a,b') = \sum_{N=1}^{16} A(a,N)*B(b',N)*p(N). Combining these gives:

E(a,b) - E(a,b') = \sum_{N=1}^{16} [A(a,N)*B(b,N) - A(a,N)*B(b',N)]*p(N)

With a little creative algebra you can see the above can be rewritten as:

E(a,b) - E(a,b') = \sum_{N=1}^{16} A(a,N)*B(b,N)*[1 \pm A(a',N)*B(b',N)]*p(N) - \sum_{N=1}^{16} A(a,N)*B(b',N)*[1 \pm A(a',N)*B(b,N)]*p(N)
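As a brute-force check of where this algebra leads (a minimal Python sketch, illustrative only and not part of the quoted post): enumerate the 16 hidden states, assign them an arbitrary probability distribution p(N), and confirm that S = E(a,b) - E(a,b') + E(a',b) + E(a',b') never leaves the interval [-2, +2].

Code:
import itertools
import random

# The 16 hidden states: each assigns +1 or -1 to (a, a', b, b').
states = list(itertools.product([+1, -1], repeat=4))

def expectation(p, i, j):
    # E for Alice's box i (0 for a, 1 for a') and Bob's box j (2 for b, 3 for b'),
    # averaged over the hidden states with weights p(N).
    return sum(p[n] * s[i] * s[j] for n, s in enumerate(states))

for _ in range(10000):
    w = [random.random() for _ in states]
    total = sum(w)
    p = [x / total for x in w]   # a random probability distribution over the 16 states
    S = expectation(p, 0, 2) - expectation(p, 0, 3) + expectation(p, 1, 2) + expectation(p, 1, 3)
    assert -2.0000001 <= S <= 2.0000001   # CHSH bound for any such distribution
print("no violation found")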
 
Last edited by a moderator:
  • #1,333
unusualname said:
If you ask the current great Physicists about QM interpretations they will probably admit we are no nearer a resolution, but they do at least know what they're talking about, here's what Joe Polchinski has to say about the fact that String Theory does not attempt to solve the interpretation problem:


Where in the last sentence he hints at MWI, but as you can see he's more interested in hard physics, not philosophical fluff, (quote taken from his comments in this blog entry replying to Smolin's The Trouble with Physics)

Typical response about QM from someone working in unification. I've received similar responses from Witten and Ashtekar (and Smolin in 2002, but his 2006 book shows he's given it more thought since). They're buried in the technical problems associated with the pursuit of a different beast -- unification of the forces and/or quantization of gravity. We need people working on all fronts, but the fronts are too big for any one person to master them all. Likewise, you might ask the author of a particular interpretation of QM how it bears on unification and receive an equally vague answer. In general, both camps (unification and foundations) agree their problems have a common resolution, they're just working on that resolution from different directions.
 
  • #1,334
RUTA said:
Typical response about QM from someone working in unification. I've received similar responses from Witten and Ashtekar (and Smolin in 2002, but his 2006 book shows he's given it more thought since). They're buried in the technical problems associated with the pursuit of a different beast -- unification of the forces and/or quantization of gravity. We need people working on all fronts, but the fronts are too big for any one person to master them all. Likewise, you might ask the author of a particular interpretation of QM how it bears on unification and receive an equally vague answer. In general, both camps (unification and foundations) agree their problems have a common resolution, they're just working on that resolution from different directions.

I would think the resolution to QM interpretation will fall out rather easily once the "unification" people hit on the correct microscopic description of reality. I can't see how there could be much useful input the other way.

Of course, once it's all resolved someone will point to a passage in Kant which explained it all hundreds of years ago. :rolleyes:
 
  • #1,335
billschnieder said:
According to you, the wikipedia article is wrong.
No, just that it was failing to adequately distinguish between two notions of the "mean" which could lead to certain readers (you) becoming confused. There weren't any statements that were clearly incorrect.
billschnieder said:
Why don't you correct it.
Your wish is my command. I have edited the opening section of the article to more clearly distinguish between the "sample mean" and the "population mean", and make clear that the expected value is equal to the population mean, not the sample mean:
For a data set, the mean is the sum of the values divided by the number of values. The mean of a set of numbers x1, x2, ..., xn is typically denoted by \bar{x}, pronounced "x bar". This mean is a type of arithmetic mean. If the data set was based on a series of observations obtained by sampling a statistical population, this mean is termed the "sample mean" to distinguish it from the "population mean". The mean is often quoted along with the standard deviation: the mean describes the central location of the data, and the standard deviation describes the spread. An alternative measure of dispersion is the mean deviation, equivalent to the average absolute deviation from the mean. It is less sensitive to outliers, but less mathematically tractable.

If a series of observations is sampled from a larger population (measuring the heights of a sample of adults drawn from the entire world population, for example), or from a probability distribution which gives the probabilities of each possible result, then the larger population or probability distribution can be used to construct a "population mean", which is also the expected value for a sample drawn from this population or probability distribution. For a finite population, this would simply be the arithmetic mean of the given property for every member of the population. For a probability distribution, this would be a sum or integral over every possible value weighted by the probability of that value. It is a universal convention to represent the population mean by the symbol μ.[1] In the case of a discrete probability distribution, the mean of a discrete random variable x is given by taking the product of each possible value of x and its probability P(x), and then adding all these products together, giving \mu = \sum x P(x).[2]

The sample mean may be different from the population mean, especially for small samples, but the law of large numbers dictates that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean.[3]
As an experiment, let's now see if anyone edits it on the grounds that it's incorrect (as opposed to edits for stylistic or other reasons). No fair editing it yourself!
billschnieder said:
It is obvious you are the one who is way off base and you know it.
So, you wish to completely ignore the quotes from various statistics texts I provided? You trust a user-edited site like wikipedia over published texts? Here they are again:
JesseM said:
(edit: See for example this book which distinguishes the 'sample mean' \bar X from the 'population mean' \mu, and says the sample mean 'may, or may not, be an accurate estimation of the true population mean \mu. Estimates from small samples are especially likely to be inaccurate, simply by chance.' You might also look at this book which says 'We use \mu, the symbol for the mean of a probability distribution, for the population mean', or this book which says 'The mean of a discrete probability distribution is simply a weighted average (discussed in Chapter 4) calculated using the following formula: \mu = \sum_{i=1}^n x_i P[x_i ]').
billschnieder said:
All the grandstanding is just a way to stay afloat, not a serious argument against the well-accepted meaning of expectation value.

Wikipedia: http://en.wikipedia.org/wiki/Mean
Wikipedia: http://en.wikipedia.org/wiki/Expected_value
Neither of these sources claims that the expected value is equal to the "sample mean" (i.e. the average of the results obtained on a series of trials), which is what I thought you were claiming when you said:
billschnieder said:
You are given a theoretical list of N pairs of real-valued numbers x and y. Write down the mathematical expression for the expectation value for the paired product.

...

Wow! The correct answer is <xy>
Of course if the "theoretical list" is supposed to represent a population rather than results from a series of trials, and we assume we are picking randomly from the population using a method that has an equal probability of returning any member from the list, in that case I would agree the answer is <xy>. But once again your statement of the problem didn't provide enough information, because the list could equally well be interpreted as a sample, and in that case the expectation value for the paired product would not necessarily be equal to <xy> since <xy> would just be the sample mean--do you disagree?
JesseM said:
Again, you said nothing about "randomly picking" from a list, you just gave a list itself and asked for the probabilities of one entry on that list.
billschnieder said:
Yes, that is exactly what I did, and you answered that it was impossible to do because you wanted to use ONLY a probability approach that involved "trials".
No I didn't, I just said not enough information was provided. If you specify that the list is intended to be a population and we are picking randomly from the population, that's A-OK with me. I already told you this was fine with me at the end of post #1277.
billschnieder said:
You do the same thing for dice and coins and you have done the same thing in your famous scratch-lotto examples
In the scratch lotto example I explicitly specified that on each trial the experimenters were picking a box at random to scratch, and at some point I bet I even pedantically specified that "at random" means "equal probability of any of the three boxes". With coins and dice it's generally an implicit assumption that each result is equally probable unless the coin/die is specified to be weighted or something.
JesseM said:
Well, excuse me for thinking your question was supposed to have some relation to the topic we were discussing, namely Bell's theorem.
billschnieder said:
While discussing Bell's INEQUALITIES, Not Bell's theorem which we haven't discussed at all
Bell's theorem is just the statement that Bell's inequalities must be obeyed in any local hidden variables theory, and since QM theoretically predicts they will be violated in some circumstances, QM is theoretically incompatible with local hidden variables. Anyway, if you want to be pedantic, we're discussing the entirety of Bell's derivation of the inequalities, and whether an analysis of the derivation implies that the inequality is only applicable under some limited circumstances (like it only being applicable to data where it is possible to "resort" in the manner you suggested). My claim is that the correct interpretation of the probabilities in Bell's derivation is that they were meant to be "limit frequentist" probabilities, and that if you look at the derivation with this interpretation in mind it all makes sense, and it shows the final inequalities do not have the sort of limited applicability you claim.
billschnieder said:
and continue to claim that Bell's equation (2) is not a standard mathematical definition for the expectation value of a paired product.
Nope, it's not. The standard mathematical definition for the expectation value of some variable x (whether it is obtained by taking a product of two other random variables A and B or in some other way) is just a sum or integral over all possible values of x weighted by their probabilities or probability densities, i.e. either \mu = \sum_{i=1}^N x_i P(x_i) or \int x \rho(x) \, dx. You can see that this standard expression for the expectation value involves no variables besides x itself. Now depending on the nature of the specific situation we are considering, it may be that functions like P(x) or ρ(x) can themselves be shown to be equal to some functions of other variables, and this is exactly where Bell's equation (2) comes from. Here, I'll give a derivation:

If x is the product of the two measurement results A and B with detector settings a and b, then according to what I said above the "standard form" for the expectation value should be \mu = \sum_{i=1}^N x_i P(x_i), and since we know that this is an expectation value for a certain pair of detector angles a and b, and that the two measurement results A and B are themselves always equal to +1 or -1, this can be rewritten as:

(+1)*P(x=+1|a,b) + (-1)*P(x=-1|a,b) = (+1)*[P(A=+1, B=+1|a,b) + P(A=-1, B=-1|a,b)] + (-1)*[P(A=+1, B=-1|a,b) + P(A=-1, B=+1|a,b)]

Then in that last expression, each term like P(A=+1, B=+1|a,b) can be rewritten as P(A=+1, B=+1, a, b)/P(a,b). So by marginalization (and assuming for convenience that λ is discrete rather than continuous), we have:

P(A=+1, B=+1|a,b) = \sum_{i=1}^N \frac{P(A=+1, B=+1, a, b, \lambda_i )}{P(a,b)}

And P(A=+1, B=+1, a, b, λi) = P(A=+1, B=+1|a, b, λi)*P(a, b, λi) = P(A=+1, B=+1|a, b, λi)*P(λi | a, b)*P(a,b), so substituting into the above sum gives:

P(A=+1, B=+1|a,b) = \sum_{i=1}^N P(A=+1, B=+1 | a, b, \lambda_i )*P(\lambda_i | a, b)

And if we make the physical assumption that P(λi | a, b) = P(λi) (the no-conspiracy assumption which says the probability of different values of hidden variables is independent of the detector settings), this reduces to

P(A=+1, B=+1|a,b) = \sum_{i=1}^N P(A=+1, B=+1 | a, b, \lambda_i )*P(\lambda_i )

Earlier I showed that the expectation value, written in its standard form, could be shown in this scenario to be equal to the expression

(+1)*[P(A=+1, B=+1|a,b) + P(A=-1, B=-1|a,b)] + (-1)*[P(A=+1, B=-1|a,b) + P(A=-1, B=+1|a,b)]

So, we can rewrite that as

(+1)*[ \sum_{i=1}^N P(A=+1, B=+1 | a, b, \lambda_i )*P(\lambda_i ) + \sum_{i=1}^N P(A=-1, B=-1 | a, b, \lambda_i )*P(\lambda_i )]+ (-1)*[ \sum_{i=1}^N P(A=+1, B=-1 | a, b, \lambda_i )*P(\lambda_i ) + \sum_{i=1}^N P(A=-1, B=+1 | a, b, \lambda_i )*P(\lambda_i )]

Or as a single sum:

\sum_{i=1}^N P(\lambda_i ) * [(+1*+1)*P(A=+1, B=+1|a,b,\lambda_i ) + (-1*-1)*P(A=-1, B=-1|a,b,\lambda_i ) + (+1*-1)*P(A=+1, B=-1|a,b,\lambda_i ) + (-1*+1)*P(A=-1, B=+1|a,b,\lambda_i )]

And naturally if the value of a along with the specific choice of λi completely determine the value of A, and likewise the value of b along with the specific choice of λi completely determines the value of B (another physical assumption), then for any given i in the sum above, three of the conditional probabilities will be 0 and the other will be 1, so it's not hard to see (tell me if you want this step explained further) why the above can be reduced to:

\sum_{i=1}^N A(a,\lambda_i ) B(b, \lambda_i ) P(\lambda_i )

...which is just the discrete form of Bell's equation (2). So, hopefully you require no further proof that although Bell's equation (2) gives one form of the expectation value, it was not meant to contradict the idea that the expectation value can also be written in the standard form:

(+1)*P(product of A and B is +1) + (-1)*P(product of A and B is -1)

...which given the knowledge that both A and B are always either +1 or -1, and A is the result for the detector with setting a while B is the result for the detector with setting b, can be written as:

E(a,b) = (+1*+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (+1*-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1*+1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (-1*-1)*P(detector with setting a gets result -1, detector with setting b gets result -1)

...which is the equation I have been bringing up over and over. Last time I brought it up, you responded in post #1275 with:
False! The above equation does not appear in Bell's work and is not the expectation value he is calculating in equation (2).
Hopefully the above derivation shows you why Bell's equation (2) is entirely consistent with the above "standard form" of the expectation value, given the physical assumptions he was making. If you still don't agree, please show me the specific step in my derivation that you think is incorrect.
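To see numerically that the two forms agree under those physical assumptions, here is a minimal Python sketch (illustrative only; the model below is a made-up toy, not anything from Bell's paper) that builds a random deterministic local model over a discrete set of λ values and compares Bell's form of E(a,b) with the "standard form" above:

Code:
import random

# Toy local deterministic model: N discrete hidden states lambda_i, each with a
# probability p[i] that is independent of the settings (no conspiracy), and
# outcome functions A(a, lambda_i) and B(b, lambda_i) taking values +1/-1.
N = 20
w = [random.random() for _ in range(N)]
total = sum(w)
p = [x / total for x in w]
A = [random.choice([+1, -1]) for _ in range(N)]   # A(a, lambda_i) for one fixed setting a
B = [random.choice([+1, -1]) for _ in range(N)]   # B(b, lambda_i) for one fixed setting b

# Bell's form (the discrete version of his equation (2)):
E_bell = sum(A[i] * B[i] * p[i] for i in range(N))

# "Standard form": (+1)*P(product = +1) + (-1)*P(product = -1)
P_plus = sum(p[i] for i in range(N) if A[i] * B[i] == +1)
P_minus = sum(p[i] for i in range(N) if A[i] * B[i] == -1)
E_standard = (+1) * P_plus + (-1) * P_minus

print(E_bell, E_standard)   # the two numbers agree up to rounding error
assert abs(E_bell - E_standard) < 1e-12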
billschnieder said:
Oh, so now you are saying that if given a population from which you can easily calculate relative frequencies, you will still not be able to use your favorite "limit frequentist" approach to obtain estimates of true probabilities, because the process used to sample the population might not be fair. Wow! You have really outdone yourself. If the "limit frequentist" approach is this useless, how come you stick to it, if not just for argumentation purposes?
It's useful in theoretical proofs involving probabilities, such as the derivation of the conclusion that Bell's inequality should apply to the "limit frequentist" expectation values in any local realist universe. And for experimental data, as long as the sample size is large we can use empirical frequencies to estimate a range for the limit frequentist probabilities with any desired degree of confidence, even though we can never be 100% confident the true limit frequentist probability lies in that range (but that's just science for you, you can never be 100% sure of any claim based on empirical evidence, even though you can be very very confident).
 
Last edited:
  • #1,336
billschnieder said:
It took me 2 minutes to find this, and it looks worse than I had thought. Pay attention to how you characterized Bell's expectation value. Also pay attention to how you are factorizing ρ(λ) within the summation. There is no escape.
So you had to look to a discussion with a different person from 2009 to find an example? Anyway, if you look closely you'll see that I did mention the assumption that Alice and Bob were picking which box to scratch at random:
The problem is that if this were true, it would force you to the conclusion that on those trials where Alice and Bob picked different boxes to scratch, they should find the same fruit on at least 1/3 of the trials. For example, if we imagine Bob and Alice's cards each have the hidden fruits A+,B-,C+, then we can look at each possible way that Alice and Bob can randomly choose different boxes to scratch, and what the results would be
Maybe I should have been more explicit about the fact that there was a probability of 1/3 that Alice would scratch a given box on any trial, and likewise for Bob, but that was certainly my implicit assumption. And I have spelled it out more explicitly in other posts, for example this one from a discussion with you in June:
That description is fine, though one thing I would add is that in order to derive the inequality that says they should get the same fruit 1/3 or more of the time, we are assuming each chooses randomly which box to scratch, so in the set of all trials the probability of any particular combination like 12 or 22 is 1/9, and in the subset of trials where they picked different boxes the probability of any combination is 1/6.
So yes, I have always assumed the limit frequentist notion of probability in any of my discussions of the lotto card example, and the post you quoted makes perfect sense with that interpretation if you keep in mind that there is a probability (in limit frequentist terms) of 1/9 that Alice and Bob will pick any given combination of boxes on each trial.
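Those 1/9 and 1/6 figures are easy to confirm with a short simulation; a minimal Python sketch (illustrative only):

Code:
import random
from collections import Counter

trials = 900_000
all_combos = Counter()
diff_combos = Counter()
for _ in range(trials):
    alice = random.choice([1, 2, 3])   # Alice picks one of her three boxes at random
    bob = random.choice([1, 2, 3])     # Bob does the same, independently
    all_combos[(alice, bob)] += 1
    if alice != bob:
        diff_combos[(alice, bob)] += 1

# Each of the 9 combinations should have relative frequency near 1/9 over all trials ...
print(all_combos[(1, 2)] / trials)
# ... and each of the 6 "different boxes" combinations near 1/6 within that subset.
print(diff_combos[(1, 2)] / sum(diff_combos.values()))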

As an aside, can you please edit your post to remove the LaTex code after the words "With a little creative algebra you can see the above can be rewritten as"? The equation there is stretching the window badly, making this page hard to read.
 
Last edited:
  • #1,337
RUTA said:
When it exhibits wave-like behavior. Once it interacts with its environment, it acquires definite position (particle-like behavior) per decoherence.
I like to muse that reality is waves in a hierarchy of media, that the true god's eye view would just see a bunch of interacting waveforms, some bounded or particle-like, and some not, some more persistent than others, etc.

However, at the level of our experience, we see cars and computers and planets and ... moons. I don't think it makes much sense to say that the moon pops into and out of existence depending on whether we happen to be looking at it. The whole quantum-speak thing can get quite silly -- detectors, moons, cats in various 'superpositions' of existing and not existing, of being here and there.

RUTA said:
RBW is not the only interpretation in which "non-interacting" means "non-existent." I got that idea from Bohr, Ulfbeck and Mottelson. Zeilinger has also been credited with that claim regarding photons.
It seems a bit silly to say that there's nothing moving from emitter to detector. Certainly the more sensible inference or hypothesis, and the one that practical quantum physics is based on, is that quantum experimental phenomena result from the instrumental probings of an underlying reality -- a reality which is presumably behaving according to some set of physical principles and which exists whether it's being probed or not.

Einstein's spooky action at a distance entails spacelike separated events determining, instantaneously, each other's existence. This is, prima facie, a nonsensical notion -- and Einstein was right to dismiss it.

RUTA said:
Absolutely, RBW is an ontological interpretation of QM. What in particular strikes you as unreasonable about this ontology? The non-existence of non-interacting entities (manifested as nonseparability of the experimental equipment)? Or, blockworld?
It's not unreasonable. Especially if you're a GR person. I just find it conceptually unappealing. Anyway, is there any way to know to what extent some theoretical construction is a description of 'reality'?
 
  • #1,338
JesseM said:
So can you please just answer the question: are you using (or are you willing to use for the sake of this discussion) the limit frequentist notion of probability, where "probability" is just the frequency in the limit as the number of trials goes to infinity?
billschnieder said:
No! I am not willing to pick and choose the definition of probability for argumentation purposes.
It's not "for argumentation purposes", it's for trying to understand what Bell actually meant, and how the probabilities in his derivation are interpreted by physicists. Your own argument which claims to show the derivation has very limited applicability is based on using a non-limit-frequentist interpretation of the probabilities in Bell's derivation. My claim is that this problem of limited applicability vanishes if we interpret the probabilities in his derivation in limit frequentist terms. Which is more likely a priori, that some guy posting on the internet is the first one to ever discover a major hole in Bell's derivation which never occurred to Bell or any other physicist, or that Bell and other physicists interpreted the probabilities in limit frequentist terms? (which again is a very common way to think about the meaning of probabilities, not some obscure notion I'm dragging up for the sake of being difficult) Are you not even willing to consider that he might have been interpreting probabilities this way, to see if the problem of limited applicability would disappear in this case?
billschnieder said:
First you said it was ONLY the "frequentist" view you wanted. Now it is ONLY a particular variant of frequentism that you want
In post #1330 I linked back to an earlier discussion with you where I made clear that I was using "frequentist" probabilities to mean frequencies in the limit as the number of trials goes to infinity (what I am now calling 'limit frequentism' in hopes of avoiding exactly the sort of quibbling you're doing above), and an even earlier discussion with another poster from 2009 where I said the same thing, before you even started posting here. Hopefully this puts to rest the notion that I am somehow shifting my position, and if these posts don't convince you I again challenge you to find any posts by me discussing Bell inequalities where I haven't been talking in limit frequentist terms.
billschnieder said:
except when it involves coins and dice, you really use the "bayesian" view.
Nope, see the three paragraphs in post #1330 starting with "No. First of all, I'm not saying that the P(heads) is actually guaranteed..."
JesseM said:
No, the "standard mathematical definition" of an expectation value involves only the variable whose value you want to find the expectation value for, in this case the product of the two measurement results.
...
The standard definition would give us:
\sum_{i=1}^N R_i P(R_i )
billschnieder said:
Wikipedia: http://en.wikipedia.org/wiki/Expected_value
E(g(X)) = \int_{-\infty}^{\infty} g(X)f(X)\,dX
The wikipedia equation calculates an expectation value for a function of X rather than X itself, but the important thing is that the expectation value equations always can be reduced to a sum/integral over the product (possible value of variable in question)*P(variable takes that value), summed or integrated over all possible values. For example, if we define a new variable Y=g(X), then it can be shown that the above equation reduces to E(Y) = \int Y * P(Y), which is the "basic form" I have been talking about. This is easier to see if we consider a discrete X and Y, so we want to show that this:

\sum_i g(x_i )f(x_i )

reduces to this:

\sum_j Y_j * P(Y_j )

First consider the case in which each xi gives a unique Yj when plugged into g(x). Then in that case, the probability of a given Yj is naturally going to be the same as the probability of the corresponding xi, and Yj is equal to g(xi), so the above will be satisfied. On the other hand, suppose there are multiple possible values of xi which, when plugged into g(x), would give the same Yj. Then for that value of j, it is true that P(Yj) = (sum over all values of i for which g(xi)=Yj) f(xi). So in that case, it must be true that for a specific value of j, Yj*P(Yj) = (sum over all values of i for which g(xi)=Yj) g(xi)*f(xi). So from this it's not hard to see why \sum_i g(x_i )f(x_i ) reduces to \sum_j Y_j * P(Y_j )...I don't feel like writing out a formal proof, but if you don't see why what I said above guarantees it, just imagine we have five possible values of x, namely x1, x2, x3, x4, x5, and only two possible values of Y, Y1 and Y2, such that g(x1) = g(x3) = g(x4) = Y1, and g(x2) = g(x5) = Y2. Then if we write out \sum_i g(x_i )f(x_i ) it would be:

g(x1)*f(x1) + g(x2)*f(x2) + g(x3)*f(x3) + g(x4)*f(x4) + g(x5)*f(x5)

And since g(x1) = g(x3) = g(x4) and g(x2) = g(x5), we can gather together terms as follows:

g(x1)*[f(x1) + f(x3) + f(x4)] + g(x2)*[f(x2) + f(x5)]

And since g(x1)=Y1 and g(x2) = Y2, and since P(Y1) = [f(x1) + f(x3) + f(x4)] and P(Y2) = [f(x2) + f(x5)], the above reduces to:

Y1*P(Y1) + Y2*P(Y2)

...which is just \sum_j Y_j * P(Y_j ). Hopefully you can see how this would generalize to arbitrary sums \sum_i g(x_i )f(x_i ) and \sum_j Y_j * P(Y_j ), where every g(xi) yields some Yj.
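The five-value example can also be checked directly; here is a minimal Python sketch (illustrative only, with made-up numbers standing in for f and g):

Code:
# Five possible x values with probabilities f(x), and a function g that maps
# x1, x3, x4 to Y1 and x2, x5 to Y2, as in the example above.
f = {'x1': 0.1, 'x2': 0.2, 'x3': 0.3, 'x4': 0.25, 'x5': 0.15}
g = {'x1': 7.0, 'x2': -3.0, 'x3': 7.0, 'x4': 7.0, 'x5': -3.0}   # Y1 = 7.0, Y2 = -3.0

# E[g(X)] written as sum_i g(x_i)*f(x_i):
lhs = sum(g[x] * f[x] for x in f)

# The same quantity written as sum_j Y_j*P(Y_j), where P(Y_j) is obtained by
# adding up f(x_i) over all x_i that map to Y_j:
P = {}
for x in f:
    P[g[x]] = P.get(g[x], 0.0) + f[x]
rhs = sum(y * P[y] for y in P)

print(lhs, rhs)   # both give the same expectation value
assert abs(lhs - rhs) < 1e-12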
billschnieder said:
You are way off base. Bell's equation two is the standard mathematical definition. The only difference between Bell's equation (2) and the last equation above is the symbols:
X = λ
g(X) = g(λ) = A(a,λ)*B(b,λ)
f(X) = ρ(λ)
The simplest mathematical definition deals not with the expectation value of a function of a random variable, but with the expectation value of the random variable itself; i.e. not E(g(X)) = \int_{-\infty}^{\infty} g(X)f(X)\,dX but rather E(Y) = \int Y*P(Y). In any case, I don't really want to discuss the definition of "simple", my claim is just that all expectation values must reduce to that last form, and this is true of Bell's equation (2) as I showed in the derivation near the end of post #1335.
billschnieder said:
Bell is not trying to redefine anything. He is simply using the standard mathematical definition of expectation value for the paired product.
Any mathematician would understand that whatever form we choose to write an "expectation value", it can always be reduced to the form E(Y) = \int Y*P(Y). Since Bell was ultimately computing the expectation value for the product of the two measurements, if we let Y equal the product of the two measurement results it must be true that his expression can be reduced to (sum over all possible values of Y) Y*P(Y). And I showed that such a reduction is in fact possible (given Bell's physical assumptions) in post #1335.
billschnieder said:
Note the dλ at the end of the expression! There is no expression in Bell's paper such as the following:
E(a,b) = (+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (+1)*P(detector with setting a gets result -1, detector with setting b gets result -1)
Your claim that such an expression is missing because Bell was simplifying for physicists is a cop-out.
No, it's just something that would be understood implicitly by anyone well-versed in probability theory, it isn't necessary to state the obvious. But since it's not obvious to you, again see the explicit derivation in post #1335.
billschnieder said:
Furthermore, there is no mention of "limit frequentist", let alone "frequentist" in Bell's paper. You are invoking those terms now only to escape humiliation.
I have already linked to posts dating way back where I explained that I interpreted Bell's probabilities in terms of frequencies in the limit as the number of trials goes to infinity, so the idea that I am changing my tune to "escape humiliation" is silly. And no, Bell doesn't mention limit frequentism, but he also doesn't mention any other notion of probability like finite frequentism or Bayesianism, so it's up to readers to interpret the meaning of "probability" in Bell's paper. Again, limit frequentism is pretty much the default assumption in theoretical proofs involving probabilities in science, but even if this weren't true, the mere fact that his derivation would have some major holes if his probabilities were interpreted in non-limit-frequentist terms, while those holes might disappear when his probabilities are interpreted in limit frequentist terms (that is my assertion anyway), is good enough reason for you to at least consider that he might have meant the probabilities in this way before triumphantly proclaiming you have found a flaw in Bell's reasoning that has somehow escaped the notice of every physicist who studied it until now. At least, you should consider this possibility if you have any intellectual integrity and want to do your best to figure out what Bell meant, as opposed to just wanting to make a rhetorical case against him by picking an interpretation designed to make him look bad.
 
  • #1,339
unusualname said:
I would think the resolution to QM interpretation will fall out rather easily once the "unification" people hit on the correct microscopic description of reality. I can't see how there could be much useful input the other way.

That's certainly the majority opinion. I think the best the foundations community can hope for is to find a new approach to unification, whereas a unified theory would certainly resolve all foundational issues.

As an example of how work in the foundations community might bear on the unification effort, our QM interpretation (Relational Blockworld) suggests a nonseparable Regge calculus approach to classical gravity (where nonseparable means "direct action" in the path integral approach). Obviously, changing classical gravity from Regge calculus (discrete, path integral version of GR) to nonseparable (direct action) Regge calculus, changes the quantum gravity program. It also changes what is meant by "unification," since the dynamical perspective, and therefore forces, are no longer part of a fundamental approach.

I didn't bring up unification per RBW to debate its merits, but merely to point out how the foundations community might contribute to the larger program of unification.
 
  • #1,340
ThomasT said:
However, at the level of our experience, we see cars and computers and planets and ... moons. I don't think it makes much sense to say that the moon pops into and out of existence depending on whether we happen to be looking at it. The whole quantum-speak thing can get quite silly -- detectors, moons, cats in various 'superpositions' of existing and not existing, of being here and there.

For most of us the phrase "not there when nobody looks" is simply a metaphor for the non-existence of non-interacting entities.

ThomasT said:
It seems a bit silly to say that there's nothing moving from emitter to detector. Certainly the more sensible inference or hypothesis, and the one that practical quantum physics is based on, is that quantum experimental phenomena result from the instrumental probings of an underlying reality -- a reality which is presumably behaving according to some set of physical principles and which exists whether it's being probed or not.

Einstein's spooky action at a distance entails spacelike separated events determining, instantaneously, each other's existence. This is, prima facie, a nonsensical notion -- and Einstein was right to dismiss it.

Well, if QM is right, one (or both) of these things has to go -- you can't have realism and locality. In our interpretation, we punt on realism, i.e., separability.

ThomasT said:
It's not unreasonable. Especially if you're a GR person. I just find it conceptually unappealing.

Most do :smile:

ThomasT said:
Anyway, is there any way to know to what extent some theoretical construction is a description of 'reality'?

That's a thorny epistemological question. Better leave that for another thread.
 
  • #1,341
billschnieder said:
There is no expression in Bell's paper such as the following:
JesseM said:
E(a,b) = (+1)*P(detector with setting a gets result +1, detector with setting b gets result +1) + (-1)*P(detector with setting a gets result +1, detector with setting b gets result -1) + (-1)*P(detector with setting a gets result -1, detector with setting b gets result +1) + (+1)*P(detector with setting a gets result -1, detector with setting b gets result -1)
Your claim that such an expression is missing because Bell was simplifying for physicists is a cop-out.
Incidentally, in case Bill or anyone else has any further doubts on this point, note that on p. 14 of the paper http://cdsweb.cern.ch/record/142461/files/198009299.pdf Bell does write the expectation value in a basically identical form in equation (13):

E(a,b) = P(yes, yes|a,b) + P(no, no|a,b) - P(yes, no|a,b) - P(no, yes|a,b)
 
Last edited by a moderator:
  • #1,342
JesseM said:
Incidentally, in case Bill or anyone else has any further doubts on this point, note that on p. 14 of the paper http://cdsweb.cern.ch/record/142461/files/198009299.pdf Bell does write the expectation value in a basically identical form in equation (13):

E(a,b) = P(yes, yes|a,b) + P(no, no|a,b) - P(yes, no|a,b) - P(no, yes|a,b)

I think you need more, like killing a werewolf... take the heart, the head, and burn the body. Even then, I somehow doubt that billy will concede anything. I enjoyed reading the paper however.

ThomasT: Why does the appealing or unappealing nature of an ontology matter? The only thing that is relevant is matching with empirical evidence, the science, and the math. I find the inevitability of death quite unappealing, but I don't doubt it as a result.
 
Last edited by a moderator:
  • #1,343
DrChinese said:
What does Fair Sampling have to do with my comment? If I predict a -1 every time, and you predict +1 every time, and it always comes up -1... Then it doesn't really much matter how often that occurs.
In this experiment photons are created in the H/V basis but measurements are performed in the +45/-45 basis (measurement x) and the L/R basis (measurement y).
If you measure linearly polarized light in a basis that is rotated by 45° you get a completely uncertain result - +1 and -1 have equal probabilities.
If you measure linearly polarized light in the circular polarization basis you get a completely uncertain result as well - +1 and -1 have equal probabilities.
So without detection bias the prediction for any single measurement in this case is 0.5. That means that the composed result from all the involved measurements (mind you, not a single output but a calculation composed of many different outputs, provided you have an algorithm for that) gives -1 half of the time and +1 half of the time without detection bias.

Because neither of the involved measurements gives a definite result without detection bias, GHZ is a comparison of two different detection biases.
So this type of experiment is a pure test of fair sampling, without the involvement of definite outcomes based on particle properties.

DrChinese said:
As I have said a million times :smile: all science involves the fair sampling assumption. There is nothing special about GHZ or Bell tests in that regard.
I can not claim that I have said this a million times but I have responded like this at least once already:

Yes, that's right. All of science relies on different approximations, including the fair sampling assumption. But all of science except QM does not blame reality, causality and whatever else when it discovers a contradiction in its conclusions. Instead it admits error and reexamines its assumptions (including the fair sampling assumption) one by one until it resolves the contradiction.
So all of science involves the fair sampling assumption, but all of science has quite strict rules about when to give up the fair sampling assumption.

DrChinese said:
And as I have also said too many times to count: if the GHZ result is due to some unknown weird bias... what is the dataset we are sampling that produces such a result? I would truly LOVE to see you present that one! Let's see:

LR=+1, +1, +1, ...
QM=-1, -1, -1, ...
Actual sample=Oops!

DrChinese said:
Actually, I had the predictions of LR and QM reversed in my little sample. It should be more like:

QM=+1, +1, +1, ...
LR=-1, -1, -1, ...
From the article you linked:
"First, one performs yyx, yxy, and xyy experiments. If the results obtained are in agreement with the predictions for a GHZ state, then the predictions for an xxx experiment for a local realist theory are exactly opposite to those for quantum mechanics."

So the dataset consists of outcomes for each of yyx, yxy, xyy and xxx experiments.
There are 8 possible different outcomes for each of those 4 experiments, which are not even conducted at the same time. 16 of those possible different outcomes (8 outcomes * 4 experiments) are observed much more frequently than the other 16. So please provide an algorithm for how you get your "output" from the 32 different outputs observed in the experiment at different times for different setups.

DrChinese said:
See this article from Zeilinger and Pan:

Multi-Photon Entanglement and Quantum Non-Locality (2002)

"Comparing the results in Fig. 16.7, we therefore conclude that our experimental results verify the quantum prediction while they contradict the local-realism prediction by over 8standard deviations; there is no local hidden-variable model which is capable of describing our experimental results."
From the same article:
"If we assume the spurious events are just due to experimental errors, we can thus conclude within the experimental accuracy that for each photon, 1, 2 and 3, quantities corresponding to both x and y measurements are elements of reality. Consequently, a local realist, if he accepts that reasoning, would thus predict that for a xxx experiment only the combinations V'V'V',H'H'V',H'V'H', and V'H'H' will be observable (Fig. 16.6b)."

This type of reasoning is not only dispensable for the ensemble interpretation, it even contradicts the ensemble interpretation, because it completely ignores the role of the ensemble in determining the outcome of a measurement.
 
  • #1,344
zonde said:
So this type of experiment is a pure test of fair sampling, without involvement of definite outcomes based on particle properties. So all of science involves the fair sampling assumption, but all of science has quite strict rules about when to give up the fair sampling assumption.

From the article you linked:
"First, one performs yyx, yxy, and xyy experiments. If the results obtained are in agreement with the predictions for a GHZ state, then the predictions for an xxx experiment for a local realist theory are exactly opposite to those for quantum mechanics."
...

So I predict that every boy is male and you predict every boy is female. These are the kind of opposite predictions we make (it's an analogy :smile: ). I provide a random but potentially biased sample which consists of all male boys to 8 standard deviations. Now, exactly how is it that we always get male boys? For this to be science - your claim that is - you need to show me a reeeeeeeeeeeeeally big batch of female boys. Where are they?

This is the strict requirement you speak of. It applies to YOU, my friend. You can't claim it is science without showing something! Absence of evidence actually is evidence of absence when it comes to sampling.
 
  • #1,345
zonde said:
In this experiment photons are created in the H/V basis, but measurements are performed in the +45/-45 basis (measurement x) and the L/R basis (measurement y).
If you measure linearly polarized light in a basis that is rotated by 45° you get a completely uncertain result: +1 and -1 have equal probabilities.
If you measure linearly polarized light in the circular-polarization basis you get a completely uncertain result as well: +1 and -1 have equal probabilities.
So without detection bias the prediction for any single measurement in this case is 0.5.
What about this do you not understand?

DrChinese said:
So I predict that every boy is male and you predict every boy is female. These are the kind of opposite predictions we make (it's an analogy :smile: ). I provide a random but potentially biased sample which consists of all male boys to 8 standard deviations. Now, exactly how is it that we always get male boys? For this to be science - your claim that is - you need to show me a reeeeeeeeeeeeeally big batch of female boys. Where are they?

This is the strict requirement you speak of. It applies to YOU, my friend. You can't claim it is science without showing something! Absence of evidence actually is evidence of absence when it comes to sampling.
Yes, of course. You tell me what I predict and then easily refute my prediction.
You know what this is called?
A strawman.
 
  • #1,346
zonde said:
1. What about this do you not understand?

2. Yes, of course. You tell me what I predict and then easily refute my prediction.
You know what this is called?
A strawman.

1. Nothing. What's your point?

2. You are the local realist, what do YOU predict for the xxx case? Does it match QM or not?
 
  • #1,347
charlylebeaugosse said:
Max Jammer indeed, but the book is (on Amazon):

The Philosophy of Quantum Mechanics: The Interpretations of Quantum Mechanics in Historical Perspective by Max Jammer (Hardcover - June 1974)

Fine's book is:
The Shaky Game (Science and Its Conceptual Foundations series) by Arthur Fine (Paperback - Dec. 15, 1996)

From there, there is an easy way to get to original writings by Einstein.

As for Einstein's realism, he did believe, I would bet, that the Moon did not even need apes to be there.
But the real issue, I think, is realism at the microscopic level.
So you think that he would not have been a microscopic realist in the EPR sense? Specifically, if two entangled particles can each be measured on either of two or more noncommuting properties X and Y (like position and momentum), and measuring the value of property X for particle #1 allows us to determine with probability 1 what the value of property X would be for particle #2 if we measured property X for particle #2, then I understand the EPR paper to suggest this means there must be a local "element of reality" associated with particle #2 that predetermines the result it would give for a measurement of property X, even if we actually measure property Y for particle #2.

This quote by Einstein from p. 5 of Bell's paper http://cdsweb.cern.ch/record/142461/files/198009299.pdf does suggest to me he favored microscopic realism in the EPR sense:
If one asks what, irrespective of quantum mechanics, is characteristic of the world of ideas of physics, one is first of all struck by the following: the concepts of physics relate to a real outside world ... It is further characteristic of these physical objects that they are thought of as arranged in a space time continuum. An essential aspect of this arrangement of things in physics is that they lay claim, at a certain time, to an existence independent of one another, provided these objects "are situated in different parts of space".

The following idea characterizes the relative independence of objects far apart in space (A and B): external influence on A has no direct influence on B ...

There seems to me no doubt that those physicists who regard the descriptive methods of quantum mechanics as definitive in principle would react to this line of thought in the following way: they would drop the requirement ... for the independent existence of the physical reality present in different parts of space; they would be justified in pointing out that the quantum theory nowhere makes explicit use of this requirement.

I admit this, but would point out: when I consider the physical phenomena known to me, and especially those which are being so successfully encompassed by quantum mechanics, I still cannot find any fact anywhere which would make it appear likely that (that) requirement will have to be abandoned.

I am therefore inclined to believe that the description of quantum mechanics ... has to be regarded as an incomplete and indirect description of reality, to be replaced at some later date by a more complete and direct one.
charlylebeaugosse said:
I am only bothered a lot by all the lies and false info that have led us to a situation where most physicists (in or close to QM) would relinquish locality and not realism at the microscopic level.
Do you really think it's true that "most physicists" would prefer to relinquish locality and not realism? If that were the case I would think Bohmian mechanics would be much more popular! Instead it seems to me that both the Copenhagen interpretation (which abandons 'realism') and the Many-worlds interpretation (whose 'realist' status depends somewhat on how you define 'realism', but it is an interpretation that many advocates say is a completely local one, see my post #8 on this thread for some references along with my own toy model illustrating how a local interpretation involving multiple copies of each experimenter can explain Bell inequality violations without being non-local) are a lot more popular, see some of the polls linked to here.
charlylebeaugosse said:
Also, I only add my physicist's sensitivity to the real work done by Jammer and Fine (see also the conference where Fine (?), Jammer, Peierls and Rosen contributed for the 50th anniversary of EPR, and other papers here and there, mostly the correspondence of Einstein (mainly with Born, but there are other gems), the Schilpp book, and one pocket book on AE's views on the world where there is more politics than physics but some good pieces anyway) and as much reading of Einstein as I could put my hands on. But as I do not read German, I lose a lot of first-hand material.
Any chance you could post some of Einstein's quotes that you think show he was not a "naive realist" or would not have agreed with the ideas in the EPR paper? If it would take too long to find them and type them up, I will understand of course.
 
Last edited by a moderator:
  • #1,348
DrChinese said:
So I predict that every boy is male and you predict every boy is female. These are the kind of opposite predictions we make (it's an analogy :smile: ). I provide a random but potentially biased sample which consists of all male boys to 8 standard deviations. Now, exactly how is it that we always get male boys? For this to be science - your claim that is - you need to show me a reeeeeeeeeeeeeally big batch of female boys. Where are they?

This is the strict requirement you speak of. It applies to YOU, my friend. You can't claim it is science without showing something! Absence of evidence actually is evidence of absence when it comes to sampling.

DrC is so right here: see the papers or books on GHZ. The contradiction occurs on every occurrence. Contrary to the inequality-based form of Bell's Theorem, the GHZ sort, "Bell's Theorem without inequalities", does not use any statistical hypothesis, as the story is:
Realism + Locality => a false equality (for each samplE, rather than for some ideal samplING).

Now, I had asked if anyone has seen a nice explanation of how locality is used. Any hint?
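For reference, here is the standard textbook sketch of where locality does the work, with the +/-1 encoding used above (this is the usual presentation of the GHZ argument, not something specific to this thread). Locality is the assumption that each photon carries a single pair of pre-existing answers $x_i, y_i \in \{+1,-1\}$ that do not depend on which measurements are chosen at the other two stations. The three verified predictions then become three equations about the same six numbers,
$$y_1 y_2 x_3 = -1, \qquad y_1 x_2 y_3 = -1, \qquad x_1 y_2 y_3 = -1,$$
and multiplying them, using $y_i^2 = 1$, gives the "false equality"
$$x_1 x_2 x_3 = (y_1 y_2 x_3)(y_1 x_2 y_3)(x_1 y_2 y_3) = -1,$$
whereas quantum mechanics predicts $x_1 x_2 x_3 = +1$ for the GHZ state. Drop locality and there is no longer a single $x_1$ (or $y_1$, etc.) common to all three equations, since each value could depend on the remote settings, and the multiplication step is no longer allowed.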
 
  • #1,349
unusualname said:
Is that supposed to be funny? Modern theoretical physics is mainly mathematical physics, in fact it's been that way for a century or so; the last great achievements by non-mathematicians were probably back in Faraday's time.

The foundations of QM have been debated for nearly a century by many great thinkers, and the conclusion is that nothing will get resolved by "word" arguments about interpretations, there needs to be a model to back up the argument and that model has to be in the language of mathematics.

Of course we need experimental results from which to check our models, and in relation to the question of this thread we have Bell experiments of Aspect et al, GHZ and delayed choice erasure experiments all of which suggest non-locality unless you are a deluded person who thinks a classical explanation makes sense. (The other explanations in terms of reinterpreting reality may have their time, but let's give the physics a chance before opening the gates for the philosophical hordes)

The most promising current model that might account for non-locality seems to be the Holographic Principle, but to properly understand that you need to understand its origins in the work of Bekenstein and Hawking in the 70s on Black Hole Thermodynamics, then you need to understand how it works with current models in String Theory, LQG etc.

This is difficult stuff, with a heavy dose of mathematical formalism. It is the arena where the useful debate about understanding the universe is taking place, not the pseudo philosophical word-play that goes on in these forums.

If you ask the current great physicists about QM interpretations they will probably admit we are no nearer a resolution, but they do at least know what they're talking about. Here's what Joe Polchinski has to say about the fact that String Theory does not attempt to solve the interpretation problem:


In the last sentence he hints at MWI, but as you can see he's more interested in hard physics than philosophical fluff (quote taken from his comments in this blog entry replying to Smolin's The Trouble with Physics).
I was a bit joking, but how many Nobel prizes in physics cover papers whose main content was one or more theorems (in a sense accepted by mathematicians)? And isn't it true that most mathematical physicists are housed in math departments (a bit less so since superstrings took control of the budgets in high-energy physics)? But:
1) what proportion of physicists consider superstrings?
2) what proportion of physicists consider superstrings to be physics?
My main point was in fact that such statements and questions (including mine here) seem far from the subject, and far from physics. As I said, I have an immense consideration for mathematical physicists.
 
  • #1,350
zonde said:
Let me give a longer quote from Einstein's essay:

But... this is an essay from 1949. How can this relate to Bell's Theorem?

zonde said:
So I think that Einstein would have discarded without regret any restrictions placed by orthodox QM on local realistic interpretation.

I don’t agree. As you state yourself:

zonde said:
Einstein was a die-hard empiricist.

I absolutely do not think Einstein would have started looking for far-fetched loopholes, etc. He was way too smart for that. I think he would have accepted the situation, as the start of something new.

zonde said:
The restriction I am talking about is that the same measurement settings at both sites should give the same outcome with probability 1.

Well, this is pretty obvious, isn't it?? The completely "new thing" is when the polarizers are nonparallel!? Einstein would of course immediately have realized that his own argument had boomeranged on him:
no action at a distance (polarizers parallel) ⇒ determinism
determinism (polarizers nonparallel) ⇒ action at a distance
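A quick numerical illustration of that second implication (a sketch of the standard CHSH argument; the cos 2θ correlation of a Phi+-type polarization-entangled pair and the usual angles are assumptions of the sketch, not something from this thread):

Code:
import itertools
import numpy as np

# Determinism: Alice holds fixed answers (a, a2) for her two settings and Bob
# holds (b, b2), all in {+1, -1}, independent of what the far side measures.
S_lr = max(abs(a*b - a*b2 + a2*b + a2*b2)
           for a, a2, b, b2 in itertools.product([1, -1], repeat=4))
print("max |S| for deterministic local values:", S_lr)    # 2

# QM correlation for a Phi+ polarization-entangled pair: E = cos(2*(alpha - beta))
def E(alpha_deg, beta_deg):
    return np.cos(2 * np.radians(alpha_deg - beta_deg))

# CHSH with the usual photon angles a = 0, a2 = 45, b = 22.5, b2 = 67.5 degrees
S_qm = E(0, 22.5) - E(0, 67.5) + E(45, 22.5) + E(45, 67.5)
print("QM S at those angles:", S_qm)                      # ~2.828

So once perfect correlation at parallel settings forces definite local values, those values can never give |S| above 2, while the nonparallel-settings QM prediction reaches 2*sqrt(2); that is the boomerang described above.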

zonde said:
If we view the Ensemble Interpretation as a physically realistic interpretation and not as some other metaphysical interpretation, we of course cannot talk about some "Global RAM".
We can talk only about some "local RAM" that is justifiable by the physical dynamics inside the equipment used in the experiments.

If we decide to have very long intervals between successive entangled pairs, we should expect complete decoherence of the entanglement.

Are you saying that if we run an EPR-Bell experiment as I proposed, we "should expect complete decoherence of entanglement" and the experiment would fail? No expected QM statistics??
 
