Comparison between quantum entanglement and a classical version

In summary: in the case of quantum entanglement the correlations are still there; they arise because the particles are related in a particular way. Quantum entanglement is a profound way to maintain correlations between particles even when they are far apart.
  • #36
Adel Makram said:
I made the following mistake: I used ##\phi=\pi+\alpha## (which is right, and I don't know how a writing mistake gave the correct answer later on) instead of ##\phi=\pi-\alpha##, and I reached the correct ##\sin^2(\frac{\beta-\alpha}{2})##.

##\alpha-\pi##, ##\pi+\alpha##, and "exactly opposite to ##\alpha##" are three different ways of describing the same direction, so of course you get the same result.

If you're facing northeast and you turn ##\pi## radians to the left, or ##\pi## radians to the right, or turn around and look in the opposite direction... you'll be facing southwest.
 
  • #37
Nugatory said:
##\alpha-\pi##, ##\pi+\alpha##, and "exactly opposite to ##\alpha##" are three different ways of describing the same direction, so of course you get the same result.

If you're facing northeast and you turn ##\pi## radians to the left, or ##\pi## radians to the right, or turn around and look in the opposite direction... you'll be facing southwest.
No, my mistake was that instead of writing ##\cos^2(\frac{\beta - \phi}{2})=\cos^2(\frac{\beta - (\pi-\alpha)}{2})=\cos^2(\frac{\beta +\alpha}{2}-\frac{\pi}{2})##, I wrote it as ##\cos^2(\frac{\beta -\alpha}{2}-\frac{\pi}{2})##, which led to the correct result.
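(Spelling that step out with ##\phi=\pi+\alpha##, which is the substitution the earlier post says is the right one, the correct result follows directly:

##\cos^2\left(\frac{\beta-\phi}{2}\right)=\cos^2\left(\frac{\beta-\alpha}{2}-\frac{\pi}{2}\right)=\sin^2\left(\frac{\beta-\alpha}{2}\right),##

using ##\cos(x-\frac{\pi}{2})=\sin x##.)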
 
  • #38
stevendaryl said:
Bell's theorem is all about the testable difference between quantum entanglement and any analogous classical correlation.

What Bell showed is that the correlations predicted by quantum mechanics cannot be explained by any such hidden-variable model (unless we allow faster-than-light influences, or back-in-time influences, or some other exotic possibility).
Bell did not actually show how a faster-than-light influence could reproduce the (very simple) correlation relationships of quantum systems. Nobody has done so; the 'influence' is left a mystery. The faster-than-light hypothesis is just an ad-hoc supposition that would allow a non-local hidden-variable theory (such as de Broglie-Bohm) to reproduce QM. Basically, all the ad-hoc 'influence' is doing is saying that there is a magic effect that causes quantum systems to behave in accordance with observations: a kind of quantum epicycle.

Second, the term 'classical' is very misleading: the Bell inequality applies to any system where bivalent logic values are derived from measurement results, and the fact that Bell's conclusions about classical systems depend on assuming that classical systems must always provide this type of value is overlooked in the rush to prop up the hypothesis about ftl influences. The failure of the type of hidden-variable model in which the wave-function is presumed to correspond to a real phenomenon supplemented by hidden parameters could also be a result of assuming the wave-function to represent something real (as in de Broglie-Bohm), rather than, as in QBism, “gambling attitudes for placing bets on measurement outcomes, attitudes that are updated as new data come to light”.

Take a look at Garden. http://www.researchgate.net/publication/226368372_Logic_states_and_quantum_probabilities

Garden develops a formal logic framework consistent with the ‘logical denial’ nature of the information that quantum interactions provide, and shows that when this type of logic is used, violations of the Bell inequalities are expected and the requirement of a break with locality appears unnecessary. Garden shows that the Bell inequality relies on bivalent truth values for its proof, and that the results of quantum interactions do not allow us to make such an assessment.

By assuming that the so-called 'classical' model returns a bivalent assessment (e.g. a result of 'Up' means 'Not Down'), you can prove Bell's theorem. When a result of 'Up' means that the initial state was "not exactly aligned with Down" (a denial), then Bell's theorem cannot be applied.
 
  • #39
jknm said:
Bell did not actually show how a faster-than-light influence could reproduce the (very simple) correlation relationships of quantum systems. Nobody has done so; the 'influence' is left a mystery. The faster-than-light hypothesis is just an ad-hoc supposition that would allow a non-local hidden-variable theory (such as de Broglie-Bohm) to reproduce QM. Basically, all the ad-hoc 'influence' is doing is saying that there is a magic effect that causes quantum systems to behave in accordance with observations: a kind of quantum epicycle.

...

Garden develops a formal logic framework consistent with the ‘logical denial’ nature of the information that quantum interactions provide, and shows that when this type of logic is used, violations of the Bell inequalities are expected and the requirement of a break with locality appears unnecessary. Garden shows that the Bell inequality relies on bivalent truth values for its proof, and that the results of quantum interactions do not allow us to make such an assessment.

By assuming that the so-called 'classical' model returns a bivalent assessment (e.g. a result of 'Up' means 'Not Down'), you can prove Bell's theorem. When a result of 'Up' means that the initial state was "not exactly aligned with Down" (a denial), then Bell's theorem cannot be applied.

First, locality is an explicit assumption of Bell's Theorem (as is realism). So violation of a Bell inequality implies locality and/or realism is wrong. So no, there is no such ad hoc assumption of non-locality due to Bell. There are those who make such an assumption, but that is different.

Second, the bivalent nature of spin measurements has been discussed in detail. It would make more sense to discuss this were it not for the existence of perfect correlations; their existence more or less undermines that line of thinking completely.
 
  • #40
jknm said:
Bell did not actually show how a faster than light influence could reproduce the (very simple) correlation relationships of quantum systems. Nobody has done so, the 'influence' is left a mystery.
Well, not exactly; constructing a hidden variables model with ftl influences that produces the quantum correlations is utterly trivial.

See e.g. http://rpubs.com/heinera/16727

(Not that I would advise anyone to think about QM in terms of hidden variables and ftl communication.)
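For concreteness, here is a minimal sketch (in Python, not a reproduction of the code at that link) of how trivial it becomes once Bob's box is simply handed Alice's setting and outcome:

Python:
import numpy as np

rng = np.random.default_rng(0)

def singlet_correlation_ftl(alpha, beta, n=200_000):
    """Toy *nonlocal* hidden-variable model: Bob's side is told Alice's
    setting and result (the 'ftl influence') and draws its own outcome
    from the corresponding quantum conditional probability."""
    a = rng.choice([+1, -1], size=n)                 # Alice: fair coin
    p_bob_plus = np.where(a == +1,
                          np.sin((beta - alpha) / 2) ** 2,
                          np.cos((beta - alpha) / 2) ** 2)
    b = np.where(rng.random(n) < p_bob_plus, +1, -1)
    return np.mean(a * b)                            # empirical correlation

print(singlet_correlation_ftl(0.0, np.pi / 3))       # ~ -cos(60°) = -0.5

The model reproduces the singlet correlation ##-\cos(\beta-\alpha)## for any pair of settings, precisely because it is allowed to be nonlocal.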
 
  • #41
I see many discussions about ftl and MWI in respect of Bell, not so much on that other way out, so called superdeterminism (SD).
I guess no one likes the idea of loss of freedom.
However, I wonder if SD is itself a misunderstanding. Let's remember the lack of an absolute frame of reference.
I suggest that, from the photon's POV, there is no time; departure and arrival are perhaps the only real definition of "simultaneous".
Then entangled particles might be an unusual(?) circumstance where three events are simultaneous: the supposed source of the entangled pair, and the two destinations. From our experience of 'time', perhaps that looks like predeterminism, but perhaps our experience is simply too limited.
I guess that puts me in the Realist camp. Kinda.
 
  • #42
Headcrash said:
I suggest that, from the photon's POV, there is no time; departure and arrival are perhaps the only real definition of "simultaneous".
Even setting aside the well-known problems that arise from any attempt to analyze a physical situation "from the photon's POV", we can do Bell-type experiments with spin-entangled particles that have non-zero rest mass and so do not travel at the speed of light.
 
  • #43
Headcrash said:
I see many discussions about ftl and MWI in respect of Bell, not so much on that other way out, so called superdeterminism (SD).
I guess no one likes the idea of loss of freedom.

Welcome to PhysicsForums, Headcrash!

Superdeterminism (SD) and determinism are completely different. Determinism in physics is a feature of some interpretations, including Bohmian Mechanics (BM). BM is an actual theory, which reproduces the predictions of quantum mechanics. It is one of several generally accepted interpretations of Quantum Mechanics. There are plenty of advocates of BM on this forum, and I seriously doubt those who don't follow BM are worried about loss of freedom.

There is currently no theory called SD, although perhaps someone will advance one some day. It is actually the "hypothesis" that the laws of physics are local realistic, but that experiments always yield values indicating that the laws of physics are not local realistic. It is just a concept fabricated by persons who don't like non-realism or non-locality.

There is also a hypothesis that the universe is 5 minutes old (see for example the quote by the famous philosopher Bertrand Russell). Neither this nor the SD hypothesis can ever be proven false as stated. Because they are not falsifiable, most scientists consider them metaphysics and not physics proper.
 
  • #44
Nugatory said:
... we can do Bell-type experiments with spin-entangled particles that have non-zero rest mass so do not travel at the speed of light.
Oops, evidence trumps metaphysics! Thanks for that.
(And now I recall reports of distinctly non-elementary particles (buckyballs) showing quantum-like behaviour in twin-slit experiments.
Back to the drawing board for me.)

In any case, I'm not familiar with the "well-known problems" of considering things from the photon's POV. Maybe I'll start here: https://www.physicsforums.com/threads/the-photons-perspective-taboo.315122/
 
  • #45
I don't understand the graph of the classical correlation, where the correlation at 45° is -1/2, and similarly +1/2 at 135°, +1/2 at 225°, and -1/2 at 315°.
Following post #9, suppose α=0 and β=45°; then S_B·β = cos(π+45°) = -cos 45°, which is less than 0. So Bob should report -B, indicating that the direction of the spin of his particle is opposite to that of Alice's. So the correlation should be -1, not -1/2.
The graph then should be a square wave rather than a triangle wave.
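(For reference, and assuming the triangle graph comes from the usual deterministic local model, in which the shared hidden axis ##\lambda## is uniformly distributed over the circle and each side outputs the sign of its own particle's projection onto its own detector: for a single run with the hidden axis exactly opposite Alice's setting, Bob's outcome is indeed determined, but the plotted correlation is the average over all ##\lambda##,

##E(\alpha,\beta)=-\int_0^{2\pi}\frac{d\lambda}{2\pi}\,\mathrm{sgn}[\cos(\alpha-\lambda)]\,\mathrm{sgn}[\cos(\beta-\lambda)]=\frac{2|\beta-\alpha|}{\pi}-1,\qquad |\beta-\alpha|\le\pi,##

which gives ##E(0^\circ,45^\circ)=\frac{1}{2}-1=-\frac{1}{2}##. The fraction of ##\lambda## values on which the two signs agree shrinks linearly with the angle, which is why the classical curve is a triangle wave rather than a square wave.)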
 

Attachments: correlation.png
  • #46
Nugatory said:
You only need values for ##\alpha## and ##\beta## because if you have these you can calculate ##\phi##.

Alice can choose any angle she wants for ##\alpha## and Bob can choose any angle he wants for ##\beta## (we could even replace both Alice and Bob with computers running random number generators). The point of the exercise is that no matter what values are chosen for ##\alpha## and ##\beta##, the ##\cos^2## and ##\sin^2## formulas will give you the probability of them both getting or not getting the same result on their measurements.

In case both have the same angle, ##\beta-\alpha=0##, the probability of getting the same result is 0, because ##\cos^2(\frac{\beta-\phi}{2})=\sin^2(\frac{\beta-\alpha}{2})##.
 
  • #47
DrChinese said:
First, locality is an explicit assumption of Bell's Theorem (as is realism). So violation of a Bell inequality implies locality and/or realism is wrong. So no, there is no such ad hoc assumption of non-locality due to Bell. There are those who make such an assumption, but that is different.

Second, the bivalent nature of spin measurements has been discussed in detail. It would make more sense to discuss this were it not for the existence of perfect correlations; their existence more or less undermines that line of thinking completely.
I don't think you can dismiss the non-bivalence of spin measurements by simply alluding to 'discussion'. If they are not bivalent, then Bell's theorem fails and cannot be applied.

The fact that Bell's theorem requires locality, does not automatically mean that the only answer is non-locality, when Bell's theorem fails for other reasons.

Garden has mathematically demonstrated that the proof of Bell's theorem depends on treating spin measurements as bivalent. The classical models considered by Bell employ bivalent measurement models, so the fact that Bell's theorem applies to them is no surprise.

There is no explicit demonstration, anywhere AFAIK, as to how non-locality can be 'put in' to a model to reproduce the results of the experiments.
In that sense, the only logical conclusion is that non-locality is nothing more than a spurious ad-hoc assumption concocted to 'explain' why quantum systems break Bell inequalities.

All it takes is for the logic assessments that we attribute to spin measurements to be a logical denial, not a bivalent assessment, and then whether or not Bell's theorem requires locality becomes irrelevant, because the theorem does not apply to the system.
 
  • #48
jknm said:
I don't think you can dismiss the non-bivalence of spin measurements by simply alluding to 'discussion'. If they are not bivalent, then Bell's theorem fails and cannot be applied
I don't know that I'd say the theorem "fails", as the theorem claims to and does preclude a large class of theories: informally, we say that it precludes all "local realistic hidden variable theories". This bivalence property, along with counterfactual definiteness, is part of what the informal speakers mean when they informally say "realistic", so the fact that rejecting bivalence allows the inequalities to be violated is consistent with Bell's theorem.

Thus (unless I'm misunderstanding your argument) you have successfully demolished the straw man claim that Bell's theorem requires rejecting locality but left the claim that is actually being made, namely that Bell's theorem requires rejecting at least one of locality and the complex of properties that we informally call "realism", untouched.

jknm said:
There is no explicit demonstration, anywhere AFAIK, as to how non-locality can be 'put in' to a model to reproduce the results of the experiments.
I've seen many. For a trivial example, consider the hypothesis that when Alice makes her measurement, a superluminal pixie is created, and this pixie travels to Bob's in-flight particle and twists its spin to point opposite to whatever Alice measured. I've deliberately chosen this example to be absurd, but it is consistent with the experimental results, and it gets that way by being explicitly non-local - experiments have shown that normal subluminal pixies won't work when the detection events are spacelike separated.
 
  • #49
jknm said:
I don't think you can dismiss the non-bivalence of spin measurements by simply alluding to 'discussion'. If they are not bivalent, then Bell's theorem fails and cannot be applied.

The fact that Bell's theorem requires locality, does not automatically mean that the only answer is non-locality, when Bell's theorem fails for other reasons.

Garden has mathematically demonstrated that the proof of Bell's theorem depends on treating spin measurements as bivalent. The classical models considered by Bell employ bivalent measurement models, so the fact that Bell's theorem applies to them is no surprise.

There is no explicit demonstration, anywhere AFAIK, as to how non-locality can be 'put in' to a model to reproduce the results of the experiments.
In that sense, the only logical conclusion is that non-locality is nothing more than a spurious ad-hoc assumption concocted to 'explain' why quantum systems break Bell inequalities.

All it takes is for the logic assessments that we attribute to spin measurements to be a logical denial, not a bivalent assessment, and then whether or not Bell's theorem requires locality becomes irrelevant, because the theorem does not apply to the system.
I would say that Bell's theorem is about (the possible mathematical-physical explanation of) *experiments* in which the experimenter observes a binary outcome. Quantum theory is usually taken to include a "measurement" part saying that we observe eigenvalues of observables, with certain probabilities. If you discard that part of QM then you can forget about Bell's theorem. But then you are disconnecting QM from the real world (it no longer makes experimental predictions).
 
  • #50
gill1109 said:
I would say that Bell's theorem is about (the possible mathematical-physical explanation of) *experiments* in which the experimenter observes a binary outcome. Quantum theory is usually taken to include a "measurement" part saying that we observe eigenvalues of observables, with certain probabilities. If you discard that part of QM then you can forget about Bell's theorem. But then you are disconnecting QM from the real world (it no longer makes experimental predictions).

It's hard to say what Rachel Wallace Garden's objection to Bell is really about without reading her paper, and I don't see a free, online copy of her paper. But by "rejecting bivalence" she might mean rejecting classical two-valued logic. In that case, I would lump her ideas into the general category of quantum logic, which in my opinion isn't a resolution to quantum weirdness, but is just a way of describing that weirdness.
 
  • #51
https://en.wikipedia.org/wiki/Bell's_theorem
From wikipedia: "With the measurements oriented at intermediate angles between these basic cases, the existence of local hidden variables could agree with a linear dependence of the correlation in the angle but, according to Bell's inequality could not agree with the dependence predicted by quantum mechanical theory, namely, that the correlation is the negative cosine of the angle."
Let's filter the data from the experiment for the cases where Alice's particle is spin-up (+) and Bob's particle is spin-down (-) relative to the +z-axis, along which Alice aligns her detector. Up to here there is no difference between the classical theory and the quantum theory. According to the Wikipedia article, the difference appears in the measured correlation: classically the correlation is linear in the angle of Bob's detector, whereas quantum mechanically (non-locally) it is a function of the negative cosine of the angle.
 
  • #52


This video gives a nice demonstration of QE and of how a hidden-variables theory yields different results for 5/9 of randomly chosen detector directions (3 directions in this video), while QM predicts different outcomes only 50% of the time.

But suppose there exist spin-pairs along all possible angles, which can be represented by f(θ), provided that when Alice measures spin-up along one direction, Bob measures spin-up along the opposite direction. f(θ) would then be a hidden variable, and if it extends over all θ (from 0 to 2π), then measuring the spin along randomly chosen directions at the two locations will give different outcomes only 50% of the time, provided that f(θ) is also a random function (white noise). So a white-noise hidden variable replicates the quantum-theory prediction!
 
  • #53
Adel Makram said:
But suppose there exist spin-pairs along all possible angles, which can be represented by f(θ), provided that when Alice measures spin-up along one direction, Bob measures spin-up along the opposite direction. f(θ) would then be a hidden variable, and if it extends over all θ (from 0 to 2π), then measuring the spin along randomly chosen directions at the two locations will give different outcomes only 50% of the time, provided that f(θ) is also a random function (white noise). So a white-noise hidden variable replicates the quantum-theory prediction!

That's exactly what Bell's theorem proves is impossible. Letting [itex]f(\theta)[/itex] be random instead of deterministic sounds like it is more general, but it actually isn't. The exact same inequalities apply, and those inequalities are violated by QM.
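A quick numerical illustration of this point (a sketch, reading the proposal as: each pair carries a uniformly random hidden axis θ, and each side locally outputs the sign of its own particle's projection onto its own detector):

Python:
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500_000)    # hidden variable: random pair axis

def E(alpha, beta):
    """Correlation of the local 'white noise' model at settings alpha, beta."""
    a = np.sign(np.cos(alpha - theta))         # Alice uses only alpha and theta
    b = -np.sign(np.cos(beta - theta))         # Bob's particle points opposite
    return np.mean(a * b)

for deg in (0, 45, 90, 135, 180):              # linear in the angle, not -cos
    print(deg, round(E(0.0, np.radians(deg)), 3))

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = abs(E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1))
print("CHSH |S| =", round(S, 3))               # ~2, the local bound; QM gives 2*sqrt(2)

In this model the overall mismatch rate is indeed 50% when the settings are chosen at random, but angle by angle the correlation is linear, ##\frac{2|\beta-\alpha|}{\pi}-1##, rather than ##-\cos(\beta-\alpha)##, and the CHSH combination never exceeds 2.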
 
  • #54
I will go through the proof of Bell's theorem.

But one question: is the difference in the correlation between the two theories (QM and local hidden variables) due to how the experimenter calculates the probability of measuring the spin along a given direction? (Wikipedia says that the correlation is linear in the angle in the classical theory and a function of the negative cosine in QT.)

In other words, is the difference in the correlation due to the way the measurement probabilities are calculated, or due to the superposition states of the spins?

Can the superposition of states be virtually eliminated by filtering the data for the cases where Alice's spin is (+)? For if the correlation is still a function of the negative cosine of the angle even though the superposition has now been eliminated, then the only way to explain it is by how the spin-measurement probability is calculated.
 
Last edited:
  • #55
Adel Makram said:
I will go through the proof of Bell's theorem.

But one question: is the difference in the correlation between the two theories (QM and local hidden variables) due to how the experimenter calculates the probability of measuring the spin along a given direction? (Wikipedia says that the correlation is linear in the angle in the classical theory and a function of the negative cosine in QT.)

In other words, is the difference in the correlation due to the way the measurement probabilities are calculated, or due to the superposition states of the spins?

Can the superposition of states be virtually eliminated by filtering the data for the cases where Alice's spin is (+)? For if the correlation is still a function of the negative cosine of the angle even though the superposition has now been eliminated, then the only way to explain it is by how the spin-measurement probability is calculated.
The point of Bell's inequality is that it talks about correlations which are directly observable. Alice and Bob each measure a spin in some direction alpha, beta, and get to see an outcome +/-1. They repeat this a number of times. The experimentally observed correlation is the average of the product of the outcomes. So it is equal to the number of times that Alice and Bob's outcomes were equal, minus the number of times they were unequal, divided by the total number of repetitions. No "theory" is being used to calculate measurement probabilities. It is all about relative frequencies in many repetitions of the same experiment.
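As a minimal sketch of that bookkeeping (the outcome lists here are made up, purely to show the arithmetic):

Python:
import numpy as np

# Hypothetical recorded outcomes, one entry per repetition, coded as +1/-1.
alice = np.array([+1, -1, +1, +1, -1, -1, +1, -1])
bob   = np.array([-1, -1, +1, -1, +1, -1, -1, +1])

n_equal   = np.sum(alice == bob)
n_unequal = np.sum(alice != bob)
correlation = (n_equal - n_unequal) / len(alice)

# The same number, computed as the average of the products of the outcomes:
assert np.isclose(correlation, np.mean(alice * bob))
print(correlation)   # -0.25 for these made-up lists

In an actual CHSH-type experiment this is computed separately for each of the four setting pairs, and the four correlations are then combined and compared with the inequality.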
 
  • #56
gill1109 said:
The point of Bell's inequality is that it talks about correlations which are directly observable. Alice and Bob each measure a spin in some direction alpha, beta, and get to see an outcome +/-1. ...
The point is that they don't actually measure an orientation; they observe the results of an interaction and make the mistake of assuming it is a measurement of orientation. The information available from the result is a denial, not a bivalent determination. All you can say of a photon emerging from the A channel of an analyzer is that the initial state was not a polarization state exactly aligned with B, which is a denial: all orientations that are not exactly B are possible, e.g. circular, elliptical, and linear in any direction but B.

Garden's paper presents a mathematical analysis showing that when the information is a denial, violations of Bell's inequality are expected.
From what I can see, she actually proves that when the information obtained is a denial, then Bell's theorem does not apply.

Second, the examples presented so far begin by assuming that the information obtained from an interaction is a bivalent value, and thus ensure that Bell's theorem applies, which makes the argument circular. By constraining the type of 'classical' interaction to one where the instrument samples a value, like an analog-to-digital converter, one constrains the problem to comply with Bell's theorem and thus guarantees a result consistent with a preconceived notion about the nature of the problem.

Only if you assume that evaluations are bivalent (i.e. A means NOT B) do you need to create some ad-hoc non-local effect in order to make a deterministic system reproduce the results of QM. The conclusion that non-locality is required for systems to reproduce the results of quantum mechanics is nothing more than an ad-hoc assumption to rescue the initial assumptions about the nature of the information obtained from a 'classical' interaction.
 
  • #57
Nugatory said:
I don't know that I'd say the theorem "fails", as the theorem claims to and does preclude a large class of theories: informally, we say that it precludes all "local realistic hidden variable theories". This bivalence property, along with counterfactual definiteness, is part of what the informal speakers mean when they informally say "realistic", so the fact that rejecting bivalence allows the inequalities to be violated is consistent with Bell's theorem.

Thus (unless I'm misunderstanding your argument) you have successfully demolished the straw man claim that Bell's theorem requires rejecting locality but left the claim that is actually being made, namely that Bell's theorem requires rejecting at least one of locality and the complex of properties that we informally call "realism", untouched.

I've seen many. For a trivial example, consider the hypothesis that when Alice makes her measurement, a superluminal pixie is created, and this pixie travels to Bob's in-flight particle and twists its spin to point opposite to whatever Alice measured. I've deliberately chosen this example to be absurd, but it is consistent with the experimental results, and it gets that way by being explicitly non-local - experiments have shown that normal subluminal pixies won't work when the detection events are spacelike separated.
All your pixie is, is an ad-hoc mechanism, an unfalsifiable 'epicycle', not actually consistent with either classical or quantum theory. Which proves my point: the only device you can think of is some contrivance, and nobody has come up with anything more explicit. To claim that the magic pixie, and other contrivances, are explicit is to do nothing more than concoct a device to override the action of the supposed hidden variables and magically make the system do whatever QM does; 'magically' doesn't cut it as explicit science.

My point is that you are forced to create ad-hoc mechanisms if Bell's theorem is assumed to apply universally to all possible classes of interaction model. However, when you examine Bell's theorem you find it depends on making assumptions as to the nature of the information that is obtained from observing an interaction - that the information is a bivalent determination. The paper I quoted (Garden) shows that when the information available is a denial (which is consistent with the problem), then Bell's theorem fails to apply.
Experiments confirm that QM's predictions are correct. The ad-hoc mechanisms would only rescue the type of 'classical' models where a measurement results in a bivalent determination - the conclusion you can reach from Bell's theorem is that bivalent assessments from interactions are ruled out.

Rachel Garden's paper is anything but informal; it explicitly proves that the assumption of bivalence is necessary to prove Bell's theorem. The real "straw man" is the insistence on bivalence: without it you cannot prove Bell's theorem. Bivalence is a necessary condition, and supposition about magic pixies is not.
 
  • #58
jknm said:
Rachel Garden's paper is anything but informal; it explicitly proves that the assumption of bivalence is necessary to prove Bell's theorem. The real "straw man" is the insistence on bivalence: without it you cannot prove Bell's theorem. Bivalence is a necessary condition, and supposition about magic pixies is not.

I can't make a judgment about Garden's paper, since I can't read it, but your description of it sounds completely wrong.
 
  • #59
jknm said:
explicitly proves that the assumption of bivalence is necessary to prove Bell's theorem.
Of course it is, but who is arguing otherwise? Bell's theorem can be succinctly stated as "If you make certain assumptions, then a particular inequality will hold". The theorem is important and interesting because experiments have shown that this inequality is violated, so therefore we can reasonably conclude that one or more of the assumptions is false.

Those assumptions include locality (in the sense that the results of a measurement cannot be influenced by events outside of the past light cone of the measurement event), counterfactual definiteness, and the property we're calling "bivalence" in this thread. So there you have it: At least one of the assumptions of locality, counterfactual definiteness, and "bivalence" does not match the way the universe works. There is room for interesting discussions about which of these assumptions are to be rejected, but you have to accept the correctness of the theorem before you have a logical basis for rejecting any of them.

(Conversely, if you want to show that Bell's theorem has "failed" you would prove, presumably by example, the existence of something that the theorem says is impossible: a theory that is local, "bivalent", and counterfactually definite and that still violates the inequality. Garden's paper, as you're describing it, doesn't do that; instead it shows that rejecting bivalence is one way of reconciling the experimental results with the correctness of Bell's theorem.)
 
  • #60
jknm said:
The point is that they don't actually measure an orientation; they observe the results of an interaction and make the mistake of assuming it is a measurement of orientation. The information available from the result is a denial, not a bivalent determination. All you can say of a photon emerging from the A channel of an analyzer is that the initial state was not a polarization state exactly aligned with B, which is a denial: all orientations that are not exactly B are possible, e.g. circular, elliptical, and linear in any direction but B.
Alice and Bob each toss a fair coin and set a setting on a measurement device to one of two possible values. Something happens inside a black box, and out comes a binary outcome which we conventionally encode as +/-1. Later we correlate the four streams of binary values: setting Alice, setting Bob, outcome Alice, outcome Bob.

There is no assumption about any orientations at all. There is no assumption about polarization.

In the recent experiment http://arxiv.org/abs/1508.05949 the black box contains a Nitrogen-Vacancy imperfection in a diamond ...

By the way anyone who would like to see the paper by Rachel Wallace Garden, please find my email address on internet and send me a regular email.

Please notice: the bivalence is *enforced* by the experimental design (if we are talking about a loophole-free CHSH-Bell type experiment). It is a macroscopic feature of the laboratory arrangement. It is not an optional assumption about the underlying physics.
 
  • #61
gill1109 said:
No "theory" is being used to calculate measurement probabilities. It is all about relative frequencies in many repetitions of the same experiment.

But in order to explain the correlation between the different outcomes theoretically, there must be a theory of how the measurement probability is calculated.

post #16
gill1109 said:
Later, when Bob measures the spin of his particle at direction [itex]\beta[/itex], he gets +1 with probability [itex]cos^2(\frac{\beta - \phi}{2})[/itex] and -1 with probability [itex]sin^2(\frac{\beta - \phi}{2})[/itex].
QT allows Bob to measure his particle in the (+) direction with probability [itex]\cos^2(\frac{\beta - \phi}{2})[/itex]. But I don't know how Bob would measure it according to the classical hidden-variables theory, which is for me the decisive point in understanding Bell's theorem.
 
  • #62
Adel Makram said:
QT allows Bob to measure his particle in the (+) direction with probability [itex]\cos^2(\frac{\beta - \phi}{2})[/itex]. But I don't know how Bob would measure it according to the classical hidden-variables theory, which is for me the decisive point in understanding Bell's theorem.

Alice sends her particle through her Stern-Gerlach device, Bob sends his particle through his Stern-Gerlach device, and they each write down the direction in which the particle was deflected. That's the measurement part, and it's not done according to any theory - we're just gathering data, to see if it matches the predictions made by the various theories. After they've done a large number of pairs, they get together and compare notes, see whether the correlations in their two lists of measurements violate Bell's inequality.
(This is also the first time that the ##\sin^2\frac{\alpha-\beta}{2}## rule will appear - they didn't need it to make their measurements).

If the inequality is violated, then the experiment we've just done tells us that we can reject any local realistic hidden-variable theory, because Bell's theorem shows that no local realistic hidden-variable theory can produce results that violate the inequality.
 
  • #63
I mean: how does one derive the dependence of the correlations (quantum and classical) on the angle between the detectors?
 
  • #64
Adel Makram said:
I mean: how does one derive the dependence of the correlations (quantum and classical) on the angle between the detectors?
According to classical theory, many different correlation functions are possible. According to quantum theory, many *more* correlation functions are possible. Many more. Quantum theory is richer than classical theory.

I wrote a short paper exploring what are the possible correlation functions according to classical theory: http://arxiv.org/abs/1312.6403
 
  • #67
There is an unstated principle in Bell's derivation, which is known as Reichenbach's Common Cause Principle. Basically, this says that if two random variables are correlated, then there must be a common cause for both. Mathematically, if you have two contingent events [itex]A[/itex] and [itex]B[/itex], and

[itex]P(A \& B) \neq P(A) \cdot P(B)[/itex]

then there must be some cause [itex]C[/itex] in the common causal past of [itex]A[/itex] and [itex]B[/itex] such that

[itex]P(A \& B | C) = P(A | C) \cdot P(B | C)[/itex]

In other words, if you knew all the facts in the past relevant to [itex]A[/itex] and [itex]B[/itex], then the probabilities would factor. This gives rise to Bell's assumption about hidden variables:

[itex]P(A \& B) = \int P(\lambda)\ d\lambda\ P(A | \lambda) \cdot P(B | \lambda)[/itex]

My understanding is that this principle isn't logically necessary, although it is tacitly assumed in almost all reasoning about cause and effect.
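For completeness, here is the standard route (a sketch, not a quotation from Bell) from that factorized form to a testable bound. Writing the [itex]\lambda[/itex]-conditional expectation values of the [itex]\pm 1[/itex] outcomes as [itex]\bar{A}(a,\lambda)[/itex] and [itex]\bar{B}(b,\lambda)[/itex], the factorization gives

[itex]E(a,b) = \int P(\lambda)\ d\lambda\ \bar{A}(a,\lambda) \cdot \bar{B}(b,\lambda)[/itex], with [itex]|\bar{A}| \leq 1[/itex] and [itex]|\bar{B}| \leq 1[/itex].

Since [itex]|x - y| + |x + y| \leq 2[/itex] whenever [itex]|x| \leq 1[/itex] and [itex]|y| \leq 1[/itex], it follows that

[itex]|E(a,b) - E(a,b')| + |E(a',b) + E(a',b')| \leq \int P(\lambda)\ d\lambda\ \left( |\bar{B}(b,\lambda) - \bar{B}(b',\lambda)| + |\bar{B}(b,\lambda) + \bar{B}(b',\lambda)| \right) \leq 2[/itex],

which is the CHSH inequality. The quantum singlet correlation [itex]-\cos(a-b)[/itex] reaches [itex]2\sqrt{2}[/itex] at suitably chosen angles, so it cannot be written in the factorized form above.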
 
  • #68
Adel Makram said:
I mean: how does one derive the dependence of the correlations (quantum and classical) on the angle between the detectors?

You'll find the calculation of the correlation predicted by quantum mechanics in many texts (In fact, it looks like Mentz114 and Heinera posted examples while I was composing this!).

You won't find a calculation of the correlation predicted by "the classical theory" anywhere, because there is no single "the classical theory" in this conversation. What you will find, in Bell's paper, is the proof that any theory that does not allow Bob's result to depend on Alice's choice of direction must predict a correlation that obeys Bell's inequality. The term "classical theory" means any such theory; we don't need any specific example to be able to follow Bell's argument that all such theories must obey the inequality. (This is analogous to the way that I can prove that all right triangles obey the Pythagorean theorem without having to talk about any specific right triangle - it may help me understand if I have some specific examples of right triangles that I can look at to see the Pythagorean theorem in action, but it's not necessary for the proof).
 
  • #69
stevendaryl said:
There is an unstated principle in Bell's derivation, which is known as Reichenbach's Common Cause Principle.
Although it is not explicitly stated in his original derivation, it is also not exactly cunningly and subtly concealed :smile:. He does cover this issue in his later writings.

The history is somewhat relevant here. When Bell first published, the question on the table was whether QM was incomplete in the sense that the EPR authors meant. Thus, Bell's starting point was informally "Let's assume properties that would have satisfied the EPR authors...", and it's clear that the assumptions in his original paper met that requirement. Only after Bell's somewhat shocking result (and the experimental confirmation of inequality violations) was digested did people start putting serious energy into identifying precisely what those assumptions were. At some point along the way, the conversation shifted from the initial conclusion ("Sorry EPR - we know what you want and why you want it, but it doesn't exist") to a more rigorous specification of exactly which classes of theories are precluded by Bell's theorem and the experimental discovery of inequality violations.
 
  • #70
The difference between classical and quantum entanglement is simply that in the quantum case the state of either top is unknown until it is measured, whereas in the classical case it is known all along.
 
