Is Bell's theorem near-sighted?

In summary: I don't see what the big deal is. You can't explain the observed correlations with a local hidden variable model; that means your LHV model is wrong, and that the observations are consistent with QM. That's all there is to it. Bell's theorem proves that no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. This follows from the locality condition, which requires that events at one end be independent of events at the other end. Despite attempts to find a way around this condition, no model, however complex, has been able to match all of the quantum mechanical predictions.
  • #1
spenserf
I know Bell has been discussed ad nauseam on these forums, but there's something I'm having trouble coming to terms with: the wording and usage of Bell's theorem seems to be near-sighted. "No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics." How is it so that no sufficiently complex model could ever be conceived which produces the same predictions? For example: in the case of measuring the polarization or spin of entangled photons, QM predicts a correlation proportional to the cosine of the difference between the two angles at which the photons are measured. It's claimed that ANY local realist model would predict a linear correlation. How can this be said with any real confidence? Is it possible that we simply don't fully understand how spin and polarization are quantized? Is it possible that there are hidden variables, but the machinery is so complex as to appear non-linear?
 
  • #2
spenserf said:
How is it so that no sufficiently complex model could ever be conceived which produces the same predictions?
It would mean that Bell's inequality is wrong - which means that a huge chunk of quantum mechanics is wrong.
Probably enough to call into question the mathematical foundation of the standard model in particle physics.
This is big league stuff.

That may be the case - we certainly do not understand everything - but proving it would be bigger than the Nobel prizes.
Look through the literature and you'll see that people have been trying really complex systems, hoping to catch a hidden variable.

If it helps - think of it as a statement conditional upon a particular model holding true.
It's a prediction of what you'd have to do to prove the model false.
 
  • #3
spenserf said:
"No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."

How is it so that no sufficiently complex model could ever be conceived which produces the same predictions? For example: in the case of measuring the polarization or spin of entangled photons, QM predicts a correlation proportional to the cosine of the difference between the two angles at which the photons are measured. It's claimed that ANY local realist model would predict a linear correlation. How can this be said with any real confidence? Is it possible that we simply don't fully understand how spin and polarization are quantized? Is it possible that there are hidden variables, but the machinery is so complex as to appear non-linear?

There are ways to convince yourself*, here's one. Simply try to prepare a set of results that fit these requirements:

a) Matches the predictions of QM for a difference of 120 degrees.
b) Provides perfect correlations at the same angles.
c) Is realistic, i.e., has values for all angles regardless of whether they are actually tested.

Example:
0 degrees / 120 degrees / 240 degrees
+ + -
- + -
etc. (after you put together about 10 or 15 you will see the effect, which is that you cannot get sufficiently close to what the quantum mechanical prediction is).

You cannot prepare such a data set by hand, even knowing what you want to accomplish! That is why no model can exist - there is no possible result set that fits the above bill unless you allow some kind of non-local communication between Alice and Bob.

*I would strongly recommend that you familiarize yourself with the core of the Bell proof first. Or you can ignore nearly 50 years of research and analysis on the matter. This has been hashed through on many posts here, you can look back at some of those too.
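A minimal Python sketch of this exercise (assuming the 0/120/240-degree settings above and simply counting pairwise matches; not part of the original post) makes the obstruction explicit:

```python
from itertools import product

# Realism: every run pre-assigns an outcome (+1 or -1) to each of the three
# settings (0, 120, 240 degrees), whether or not that setting is measured.
pairs = [(0, 1), (1, 2), (0, 2)]   # AB, BC, AC

for triple in product((+1, -1), repeat=3):
    # Fraction of the three pairs that agree for this pre-assigned triple.
    match_fraction = sum(triple[i] == triple[j] for i, j in pairs) / 3
    print(triple, "fraction of matching pairs:", round(match_fraction, 3))

# Every line prints 0.333 or 1.0: with only two outcomes and three settings,
# at least one pair must agree.  Any mixture of such triples therefore has an
# average pairwise match rate of at least 1/3, which can never reach the 0.25
# that QM predicts for entangled pairs measured 120 degrees apart.
```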
 
  • #4
What Simon Bridge and DrChinese said. But I'll add my two cents.

spenserf said:
I know Bell has been discussed ad nauseam on these forums, but there's something I'm having trouble coming to terms with: the wording and usage of Bell's theorem seems to be near-sighted. "No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics." How is it so that no sufficiently complex model could ever be conceived which produces the same predictions?
Because of the locality condition that events at one end be independent of events at the other end. The quote you're questioning isn't just a generalization. It's a mathematical fact that Bell proved almost 50 years ago. An lhv model can be as complex as you want to make it, but if it's explicitly local then it can't match all qm predictions.
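For concreteness, "explicitly local" here can be sketched in the notation of Bell's 1964 paper (the same equations (1) and (2) discussed later in this thread): each outcome is a function only of its own detector setting and a shared hidden variable λ, and the correlation is built from those local outcomes alone.

```latex
% Sketch of Bell's local hidden variable form (his equations (1) and (2)):
% each outcome depends only on its local setting and the shared variable \lambda.
\[
A(\vec{a},\lambda) = \pm 1, \qquad B(\vec{b},\lambda) = \pm 1, \qquad
P(\vec{a},\vec{b}) \;=\; \int d\lambda \,\rho(\lambda)\, A(\vec{a},\lambda)\, B(\vec{b},\lambda).
\]
```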

spenserf said:
For example: in the case of measuring the polarization or spin of entangled photons, QM predicts a correlation proportional to the cosine of the difference between the two angles at which the photons are measured. It's claimed that ANY local realist model would predict a linear correlation. How can this be said with any real confidence?
Because it's been mathematically proven. The only question, and it's apparently what keeps lhv fans hopeful, is whether some alternative condition (one that still clearly qualifies as a locality condition) can be discovered which doesn't place the same restrictions on an lhv model that Bell's locality condition does. But nobody's found one yet, and it looks like a lost cause.

spenserf said:
Is it possible that we simply don't fully understand how spin and polarization are quantized?
Not likely. The qm predictions have been correct so far.

spenserf said:
Is it possible that there are hidden variables, but the machinery is so complex as to appear non-linear?
There are hidden variables. That's not in dispute.

The observed correlations are nonlinear. There's nothing else to refer to.
 
  • #5
Thanks for the responses. But so far these answers are of the "just because" form. I am familiar with the theorem. Indeed, DrChinese, I've read your own write-ups on both Bell and EPR in the past and found them helpful. But perhaps the central question was missed. The theorem is predicated on local hidden variables producing a linear correlation in measurement with regard to the measuring angle. Why is this so? I understand quite well that QM predicts a variable/nonlinear correlation and that observation verifies it. I understand that the proofs are thorough regarding the ways in which a linear correlation differs from the types of correlations in measurements we see in QM. But specifically, why can there never be a LHV model which produces nonlinear correlations with regard to measurement angle?

Consider the attached image. It represents a scenario in which you are trying to measure the color of a point on a single sphere (which is divided in half into two colors and which is in a random orientation) and a corresponding point at some angle relative to the first point. A is the first measured point, B the second, and C the third. The angle between A and B is θ, as is the angle between B and C, with respect to a 2 dimensional measuring plane shown in green. The probability of getting a matching color on B is proportional to the portion of the circumference marked in blue which is the same color as A. If you run this for all possible alignments of D (the axis of the sphere) then you get the same correlation between A and B and between B and C, but because the portion of correlated points on the illustrated circumference does not vary linearly with the angle difference, you will find that the probability of correlation between A and C varies nonlinearly with respect to θ.
 

Attachments

  • nonlinear.jpg
  • #6
spenserf said:
Thanks for the responses. But so far these answers are of the "just because" form. I am familiar with the theorem. Indeed, DrChinese, I've read your own write-ups on both Bell and EPR in the past and found them helpful. But perhaps the central question was missed. The theorem is predicated on local hidden variables producing a linear correlation in measurement with regard to the measuring angle. Why is this so? I understand quite well that QM predicts a variable/nonlinear correlation and that observation verifies it. I understand that the proofs are thorough regarding the ways in which a linear correlation differs from the types of correlations in measurements we see in QM. But specifically, why can there never be a LHV model which produces nonlinear correlations with regard to measurement angle?

Consider the attached image. It represents a scenario in which you are trying to measure the color of a point on a single sphere (which is divided in half into two colors and which is in a random orientation) and a corresponding point at some angle relative to the first point. A is the first measured point, B the second, and C the third. The angle between A and B is θ, as is the angle between B and C, with respect to a 2 dimensional measuring plane shown in green. The probability of getting a matching color on B is proportional to the portion of the circumference marked in blue which is the same color as A. If you run this for all possible alignments of D (the axis of the sphere) then you get the same correlation between A and B and between B and C, but because the portion of correlated points on the illustrated circumference does not vary linearly with the angle difference, you will find that the probability of correlation between A and C varies nonlinearly with respect to θ.

The candidate local realistic model doesn't have to be linear at all. You will notice that was not one of the criteria I supplied above. The reason anyone mentions the linear issue is that such models most closely match QM. But you CANNOT have ANY realistic model that matches QM (unless of course there is a non-local component).

So you will see that your example cannot be both realistic and match QM. Again, try to generate triplets at A=0/B=120/C=240 degrees and you will see it cannot be done. And don't forget you need perfect correlations too! So AB=BC=AC=.25 (or .75 depending on whether you are modeling Type I or Type II PDC). The best you will be able to do is AB=BC=AC=.333 (or .666).

Normally, when this subject comes up, the first thing people want to do is ignore the realism requirement. That requirement is, to be precise, the idea that counterfactual outcomes can be presented (i.e. you can provide a value for any hypothetical measurement). Then the locality requirement is that such outcomes are independent of the actual choice of measurement elsewhere. What Bob chooses to measure does not affect Alice's outcome.

If you deny that counterfactual outcomes exist, or need to be supplied, then you are simply accepting the standard viewpoint and agreeing with Bell.
 
  • #7
Or to put it another way: the linear idea is not a stringent requirement. It's more anecdotal than anything, and it is absolutely not part of the formalism. So objecting to it does not undercut Bell.
 
  • #8
As an aside, Christopher Fuchs has pointed out that some physicists do interpret Bell's theorem to imply non-locality, because Bell's inequality only assumes locality, not realism:
There is a coterie within the quantum-foundations wars (which included John Bell himself and has modern spokesmen in David Albert, Nicolas Gisin, and Travis Norsen) that claim that the only implication of the Bell-inequality violations is nonlocality - in other words, that it is not the dichotomous choice between nonlocality and “unperformed experiments have no results” (or both) that we have been claiming.
Ghirardi argues similarly:
I have known John Bell in person, and I have seen him, on various occasions, become terribly upset at suggestions that the derivation of his inequality may require assumptions other than locality. As it can be easily deduced from all of his writings, Bell always emphasized that whenever he was using the hypothesis of determinism or realism, he was actually assuming locality, and he then derived these further assumptions from the logical conjunction of locality and the validity of the perfect quantum correlations.
This is still the minority position, however.
 
  • #9
DrChinese said:
There are ways to convince yourself*, here's one. Simply try to prepare a set of results that fit these requirements:
In 1985 David Mermin wrote a fantastic article for Physics Today, "Is the moon there when nobody looks?", explaining this setup. It's how I always explain Bell to neophytes. A quick google search turned up this link to it:

www.iafe.uba.ar/e2e/phys230/history/moon.pdf [Broken]
 
  • #10
DrChinese said:
Or to put it another way: the linear idea is not a stringent requirement. It's more anecdotal than anything, and it is absolutely not part of the formalism. So objecting to it does not undercut Bell.
With linearly polarized entangled photons, Bell showed that for all lhv theories the decrease in coincidence rate must be linear or less:

a = a1 - a2 = the difference in polarization directions
R0 = the coincidence rate with the polarizers perfectly aligned
D1 = R0 - R(a)
D2 = R0 - R(2a)

Bell's requirement: D2 ≤ 2(D1)

QM predicts D1 = R0[1 - cos²(a)] and D2 = R0[1 - cos²(2a)].

So while linearity is not an assumption, it is an upper bound for lhv theories; when that bound is exceeded, the inequality is violated, as in the case a = 20°, where D2 is about 3.5x D1 (well above the allowed factor of 2).
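A quick numeric check of these expressions, assuming the QM forms quoted above and normalizing R0 to 1 (a sketch in Python, not from the original post):

```python
import math

a = math.radians(20.0)                 # misalignment angle from the example above
R0 = 1.0                               # coincidence rate at perfect alignment, normalized

D1 = R0 * (1 - math.cos(a) ** 2)       # QM: D1 = R0[1 - cos^2(a)]
D2 = R0 * (1 - math.cos(2 * a) ** 2)   # QM: D2 = R0[1 - cos^2(2a)]

print(f"D1 = {D1:.4f}, D2 = {D2:.4f}, D2/D1 = {D2 / D1:.2f}")
print("LHV bound D2 <= 2*D1 holds?", D2 <= 2 * D1)   # prints False: the bound is violated
```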
 
  • #11
DrChinese said:
So you will see that your example cannot be both realistic and match QM. Again, try to generate triplets at A=0/B=120/C=240 degrees and you will see it cannot be done. And don't forget you need perfect correlations too! So AB=BC=AC=.25 (or .75 depending on whether you are modeling Type I or Type II PDC). The best you will be able to do is AB=BC=AC=.333 (or .666).

Again I'll have you look at the attached image. Continuing my model of a sphere having any possible orientation when entering a measuring apparatus: M1 and M2 are the points of measurement, 120° apart. θ can be at any value, and given a value for θ, the probability of M2 matching is equal to the proportion of the circumference drawn by the yellow line (and shown as a cutaway to the left) which is of a color matching M1, namely Pr = 2λD/(2πD) = λ/π. So to find the total probability of correlation you would sum up this way for all values of θ and divide by n. Given my diagram: λ = arccos(tan(π-θ)/√3). This works out to an integral which is a bit beyond me... But I can at least do an approximation by taking a handful of values for θ and averaging them. Taking increments of π/12, I summed Pr from θ = π/2 to 17π/12, and divided by n... I came out with about .23275.

so... what am I missing? The key difference between what I'm suggesting and the example you use is that given the spherical geometry, the proportion of possible correlations to anti-correlations varies nonlinearly as θ. This doesn't match QM exactly, but that isn't my point. If using a spherical interpretation can generate an unexpected prediction, then how (and please be thorough) can we be sure that no one would ever discover an lhv model which matches observation?
 

Attachments

  • nonlinear2.jpg
  • #12
spenserf said:
so... what am I missing? The key difference between what I'm suggesting and the example you use is that given the spherical geometry, the proportion of possible correlations to anti-correlations varies nonlinearly as θ. This doesn't match QM exactly, but that isn't my point. If using a spherical interpretation can generate an unexpected prediction, then how (and please be thorough) can we be sure no one would ever discover an lhv model which matches observation?

I have to ask... Have you read Bell's original paper? It's online at http://www.drchinese.com/David/Bell_Compact.pdf.

All this discussion of whether the correlation is linear or not is beside the point, as Bell's argument is not "predicated on local hidden variables producing a linear correlation in measurement with regard to the measuring angle". Indeed, in the discussion leading up to equation #7 in the paper, Bell shows how a local hidden variable model can lead to a cosθ relationship between measurements; and with your ball you've constructed another lhv model with non-linear correlations.

Instead, the flow of the argument is roughly as follows:
A) If we assume the truth of equation #2 in the paper, mathematical logic leads us to Bell's inequality.
B) Quantum mechanics, under some circumstances, makes predictions that disagree with Bell's inequality.
C) Therefore, if QM is correct, then the assumption behind equation #2 in the paper must be false.

Now that experiment has verified the quantum mechanical prediction, we pretty much have to admit that equation #2 in the paper is not correct. Thus, LHV theories have only two ways forward:
1) Find a flaw in the experimental procedures, in which case maybe equation #2 has not been proven false after all. This leads us into the endless (and increasingly futile) search for "loopholes" in the experiments.
2) Find a theory that doesn't imply the truth of equation #2, but that would still be generally considered a realistic LHV theory (an informal definition would be "would have satisfied Einstein, Podolsky, and Rosen"). This is a very tough challenge, as equation #2 is pretty much a restatement of what an EPR local realistic theory would be.

(There is room for endless discussion about exactly which classes of theories are excluded if Bell's equation #2 does not hold in general... and I have no doubt that this thread in general, and this post in particular, will provoke more of this discussion... But that does not change the fact that a broad class of theories are excluded, including all those that would have been generally accepted as LHV in the pre-Bell world).
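For reference, the inequality that step A extracts from the assumption behind equation #2 is Bell's original inequality, usually quoted as (a sketch in the paper's notation):

```latex
% Bell's original inequality: any correlation function P of the local
% hidden variable form in equation #2 must satisfy this for all settings.
\[
1 + P(\vec{b},\vec{c}) \;\ge\; \bigl| P(\vec{a},\vec{b}) - P(\vec{a},\vec{c}) \bigr|.
\]
```

Quantum mechanics predicts P(a,b) = -a·b for the spin singlet, which violates this bound for suitable choices of the three directions; that is step B.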
 
  • #13
spenserf said:
Again I'll have you look at the attached image. Continuing my model of a sphere having any possible orientation when entering a measuring apparatus: M1 and M2 are the points of measurement, 120° apart. θ can be at any value, and given a value for θ, the probability of M2 matching is equal to the proportion of the circumference drawn by the yellow line (and shown as a cutaway to the left) which is of a color matching M1, namely Pr = 2λD/(2πD) = λ/π. So to find the total probability of correlation you would sum up this way for all values of θ and divide by n. Given my diagram: λ = arccos(tan(π-θ)/√3). This works out to an integral which is a bit beyond me... But I can at least do an approximation by taking a handful of values for θ and averaging them. Taking increments of π/12, I summed Pr from θ = π/2 to 17π/12, and divided by n... I came out with about .23275.

so... what am I missing? The key difference between what I'm suggesting and the example you use is that given the spherical geometry, the proportion of possible correlations to anti-correlations varies nonlinearly as θ. This doesn't match QM exactly, but that isn't my point. If using a spherical interpretation can generate an unexpected prediction, then how (and please be thorough) can we be sure that no one would ever discover an lhv model which matches observation?

And I will again tell you the same thing. You are leaving out realism!

You can hand pick the values yourself, and then build a model to suit. But it won't make any difference because the values will still not be able to match an observer independent model. Clearly, you cannot have an expectation of .23275 for the angle settings 0/120/240 degrees. If you have 0/120 match that percentage, and 120/240 match that percentage, the likelihood of 0/240 matching will be something like 50%. Oops, that means it is an observer dependent result because it is dependent on which pair is actually being measured.

To help you see this, and I will use the QM value of .25 instead of your value of .23275 to make it clear:


0 120 240
+ - +
- - +
+ - -
- + -

0/120 matches 25%, 120/240 matches 25%, so all good. But when that happens, you end up with 50% for 0/240 matches! That means it is observer dependent and therefore not realistic. Once you adjust your model to be observer independent, your best ratio will be .333.

Now I know what you are thinking: "my model IS observer independent". Well, sorry, no, it isn't, as I am showing you. The test I have given above is standard. If it is realistic, you must be able to produce values for any angle setting independent of which other setting is selected.

By the way, there have been a number of attempts to exploit ideas like yours to produce a local realistic model (of course they all are failures). See for example Caroline Thompson's "Chaotic Ball" model.
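For illustration, here is a short Monte Carlo sketch of a colored-ball model of the kind discussed above (an assumption-laden toy, not taken from the thread: the hidden variable is a uniformly random orientation of the dividing plane, and each measurement simply reports which colored half its point falls in). The observer-independent match rate comes out near .333 at 120° separations, never the .25 of QM:

```python
import math
import random

def random_axis():
    """Uniformly random unit vector: the hidden orientation of the ball's dividing plane."""
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > 1e-9:
            return [x / norm for x in v]

def outcome(direction, axis):
    """+1 or -1 according to which colored hemisphere the measured point lies in."""
    return 1 if sum(d * n for d, n in zip(direction, axis)) >= 0 else -1

# Measurement directions at 0, 120 and 240 degrees in a fixed plane.
dirs = [(math.cos(math.radians(t)), math.sin(math.radians(t)), 0.0) for t in (0, 120, 240)]

trials = 200_000
matches = {"0/120": 0, "120/240": 0, "0/240": 0}
for _ in range(trials):
    axis = random_axis()
    a, b, c = (outcome(d, axis) for d in dirs)   # all three outcomes exist on every run
    matches["0/120"] += (a == b)
    matches["120/240"] += (b == c)
    matches["0/240"] += (a == c)

for pair, count in matches.items():
    print(pair, "match rate:", round(count / trials, 3))   # each is near 0.333, not 0.25
```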
 
  • #14
Nugatory said:
Now that experiment has verified the quantum mechanical prediction, we pretty much have to admit that equation #2 in the paper is not correct...
Yes, but does the falsity of equation # 2 imply non-locality or non-realism? Bell himself felt it was non-locality.
 
  • #15
bohm2 said:
Yes, but does the falsity of equation # 2 imply non-locality or non-realism? Bell himself felt it was non-locality.

That's the follow-on discussion that I was thinking about when I wrote the parenthesized afterthought:
(There is room for endless discussion about exactly which classes of theories are excluded if Bell's equation #2 does not hold in general... and I have no doubt that this thread in general, and this post in particular, will provoke more of this discussion... But that does not change the fact that a broad class of theories are excluded, including all those that would have been generally accepted as LHV in the pre-Bell world).

You're asking an important question, but it's one that can't even be asked until after we've accepted that we can't have locality AND realism, so that Bell+experiment have refuted the EPR argument that QM is incomplete by saying "It's not going to be completed by the discovery of underlying local realism - get over it!".

I believe that most Bell-questioners haven't made it through that stage yet. They're not ready for a discussion of whether to give up locality or realism because they haven't yet accepted that we can't have both.
 
  • #16
Nugatory said:
That's the follow-on discussion that I was thinking about when I wrote the parenthesized afterthought:

You're asking an important question, but it's one that can't even be asked until after we've accepted that we can't have locality AND realism, so that Bell+experiment have refuted the EPR argument that QM is incomplete by saying "It's not going to be completed by the discovery of underlying local realism - get over it!".

I believe that most Bell-questioners haven't made it through that stage yet. They're not ready for a discussion of whether to give up locality or realism because they haven't yet accepted that we can't have both.
Thanks, I should read a bit more carefully :redface:
 
  • #17
Nugatory said:
I believe that most Bell-questioners haven't made it through that stage yet. They're not ready for a discussion of whether to give up locality or realism because they haven't yet accepted that we can't have both.

I'd have to agree. But this is the purpose of this thread: I'm trying to come to terms with a claim I've heard repeated, and it is obviously a difficult one to just follow along with without fully wrestling with it.

That said, thanks to everyone for your responses.

One difficulty here may be that theoretical physics has advanced to such a point that it truly is natural philosophy in the classical sense. And along with any philosophy, we easily get tied up in terminology.

http://arxiv.org/pdf/quant-ph/0607057v2.pdf

I'm also curious whether a one-electron type theory, perhaps expanded to other particles (one photon, one quark, etc.), could function as a non-local realist explanation of Bell violation.
 
  • #18
spenserf said:

Well, Travis is a die-hard* Bohmian, so take his paper with a grain of salt. His point of view is just one; there are at least as many papers written that should be titled "Against Locality" (keeping in mind that his paper's title of 'Against Realism' is quite a misnomer).

*And I mean in the sense that he thinks he is right and everyone else is wrong.
 
  • #19
spenserf said:
... perhaps the central question was missed. The theorem is predicated on local hidden variables producing a linear correlation in measurement with regard to the measuring angle. Why is this so? I understand quite well that QM predicts a variable/nonlinear correlation and that observation verifies it. I understand that the proofs are thorough regarding the ways in which linear correlation =/= the types of correlations in measurements we see in QM. But specifically, why can there never be a LHV model which produces nonlinear correlations with regard to measurement angle?
It might be that Bell inequalities are not always predicated on a linear correlation. I don't know. Maybe DrChinese or one of the other science advisors can answer that question.

I do think that it's correct, as DrChinese mentioned, that not all LHV models of quantum entanglement predict a linear correlation. A linear correlation is the maximum correlation that can be gotten with a hidden variable model constrained by Bell's locality condition. There's also a minimum bound to the correlation predicted by these types of models. The QM predictions lie, mostly, outside these LHV boundaries. The LHV linear correlation produces, as far as I know, the most data points in common with QM. So, if all Bell inequalities are predicated on a linear correlation, then maybe that's why.

You can see for yourself that the prototype Bell LHV model, wherein the probability distribution for λ is uniform (all λ values are equally likely), will predict a linear correlation.

C(a,b) = ∫ ρ(λ) A(a,λ) B(b,λ) dλ
C(a,b) = (1/2π) ∫ sign[cos 2(a-λ)] sign[cos 2(b-λ)] dλ
= (1/2π)(2π - 8|a-b|)
= 1 - 4θ/π (where θ = |a-b|)

For θ = 0 the correlation coefficient is 1, meaning either both photons are passed by their polarizers or both are absorbed.
For θ = π/2 the correlation coefficient is -1, meaning if one photon is transmitted by its polarizer the other must be absorbed, and vice versa.
For θ = π/4 the correlation coefficient is 0, meaning there's no correlation.

These three values are the only values (for θ) which match the QM prediction. For any other values of θ this LHV correlation function doesn't match QM predictions or experimental results.
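A numerical check of this correlation function (a sketch: the integral above evaluated by averaging over a fine grid of λ, compared against the closed form 1 - 4θ/π and against cos 2θ, the QM prediction for polarization-entangled photons):

```python
import math

def sign(x):
    return 1.0 if x >= 0 else -1.0

def lhv_correlation(theta, n=100_000):
    """Average sign[cos 2(a - lam)] * sign[cos 2(b - lam)] over uniform lam in [0, 2*pi)."""
    a, b = 0.0, theta
    total = 0.0
    for k in range(n):
        lam = 2.0 * math.pi * k / n
        total += sign(math.cos(2 * (a - lam))) * sign(math.cos(2 * (b - lam)))
    return total / n

for theta_deg in (0.0, 22.5, 45.0, 67.5, 90.0):
    theta = math.radians(theta_deg)
    print(f"{theta_deg:5.1f} deg:  LHV numeric = {lhv_correlation(theta):+.3f}, "
          f"linear form = {1 - 4 * theta / math.pi:+.3f}, QM cos(2θ) = {math.cos(2 * theta):+.3f}")
```

The numeric average reproduces the linear form, and it agrees with the QM value only at 0, 45 and 90 degrees, as stated above.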

It's fairly easy to see that Bell inequalities must be violated by QM. The difficulty most people seem to have is identifying what, precisely, it is about Bell LHVs that causes them to be incompatible with QM. It is, as Bell himself has told us, Bell's locality condition.

Even Bell's initial expression of realism in the functions (Bell's equations (1) in his paper, "On The Einstein Podolsky Rosen Paradox") that determine individual detection [A(a,λ) = ±1, B(b,λ) = ±1] is an expression of Bell locality (i.e., it's made explicit that the results at A depend only on the value of λ and the setting of a, and the results at B depend only on the value of λ and the setting of b).

Bell's locality condition, as embodied in the form of his equation (2) in the same paper, makes it further explicit that the individual results (already written in his equations (1) as depending only on the local setting and λ) are independent of each other: the result B doesn't depend on the setting of a, and the result A doesn't depend on the setting of b.

Bell's equation (2) is an almost perfect denotation of the assumption of the principle of local action. Almost, because it also encodes the assumption that the measurement outcomes, A and B, are statistically independent, which they aren't. They're statistically dependent in that, prior to a detection at either end during a coincidence interval, the sample space ρ(λ) for both ends is a uniform distribution across the entire range of possible λ values; on detection at one end, the sample space at the other end is changed, which is the definition of statistical dependence.
 
  • #20
I recently came across a super-simple proof of the CHSH form of the Bell inequality,

-2 ≤ C(a,b) - C(a,d) + C(c,b) + C(c,d) ≤ +2

where C(a,b) = ∫ dλ ρ(λ) A(a,λ)B(b,λ) .

Here is the proof. From the definition of C(a,b), we have

C(a,b) - C(a,d) + C(c,b) + C(c,d)
= ∫ dλ ρ(λ) [ A(a,λ)B(b,λ) - A(a,λ)B(d,λ) + A(c,λ)B(b,λ) + A(c,λ)B(d,λ) ] .

In general, A(a,λ) = ±1 and B(b,λ) = ±1. Thus, each of the four terms inside the square brackets is itself ±1 (for any value of λ). Furthermore, the product of these four terms (including the explicit minus sign as part of the 2nd term) is simply -1, because each factor of A(a,λ), etc, appears exactly twice in this product.

Thus, the four terms must consist of either three +1's and one -1, or three -1's and one +1.

Thus the sum of the four terms must be either +2 or -2.

Hence, the integral is computing the weighted average of a set of +2's and -2's. This weighted average must then be in the range from -2 to +2 (inclusive).

Hence, -2 ≤ C(a,b) - C(a,d) + C(c,b) + C(c,d) ≤ +2, which is the CHSH inequality.
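The key step can also be checked by brute force: since A(a,λ), A(c,λ), B(b,λ), B(d,λ) are each ±1 for any given λ, it suffices to enumerate the 16 sign combinations. A small sketch:

```python
from itertools import product

results = set()
for Aa, Ac, Bb, Bd in product((+1, -1), repeat=4):
    # The CHSH combination inside the integral, for one fixed value of lambda.
    results.add(Aa * Bb - Aa * Bd + Ac * Bb + Ac * Bd)

print(sorted(results))   # [-2, 2]: every lambda contributes exactly +2 or -2, so the
                         # weighted average C(a,b) - C(a,d) + C(c,b) + C(c,d) lies in [-2, +2].
```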
 
  • #21
Just reading over the comments. Considering everything that's been said, it looks to me like spenserf should have the answers to his questions. I had skimmed over a couple of Nugatory's comments, but he covered something that I had been unsure about along with spenserf. The nice simple proof in Avodyne's post #20 helps a lot also. Thanks.
 

What is Bell's theorem?

Bell's theorem is a fundamental result in quantum mechanics which states that certain predictions of quantum mechanics cannot be reproduced by any local hidden variable theory. It is a mathematical proof that no theory that is both local and realistic can reproduce all quantum predictions.

Why is Bell's theorem important?

Bell's theorem has important implications for our understanding of the fundamental nature of reality and has been a subject of much debate and research in the scientific community. It challenges our classical intuitions and has led to the development of various interpretations of quantum mechanics.

What does it mean for Bell's theorem to be "near-sighted"?

"Near-sighted" in this context refers to a hypothetical situation in which Bell's theorem would be violated at shorter distances, but not at larger distances. This possibility has been explored in various experiments, but it has not been conclusively proven to be the case.

What evidence supports Bell's theorem?

Bell's theorem has been tested in numerous experiments, from the early photon-polarization tests of the 1970s and 1980s (such as Aspect's experiments) to the loophole-free Bell tests of 2015. These experiments have consistently shown correlations that violate Bell inequalities, in agreement with quantum mechanics and incompatible with local hidden variable theories.

What are the practical applications of Bell's theorem?

While Bell's theorem is primarily a result about the foundations of physics, it has played a crucial role in the development of technologies such as quantum cryptography and quantum computing, which rely on the non-classical correlations that the theorem highlights.
