# Bell's Inequality

#### Jimster41

Gold Member
Thanks for that great list of resources! I have ordered a couple of the books, and as of this morning I am planning to take one of those Susskind courses.

I'm not against math. I love math. But, I have no talent for it.

The other image I have carried around is of this almost transparent octopus looking Probability wave-function spread between the laser and the slits and poking through both slits hovering right in front of the screen, but not touching it - just yet. Then when one tentacle touches the screen - all of a sudden you can see the octopus - how's that for an understanding! ;-)


#### bhobba

Mentor
I'm not against math. I love math. But, I have no talent for it.
That's not important. What is more important is perseverance. Take your time and don't give up - it's not a race. Post here with any queries - that's what this forum is all about. Soon you will have an understanding way beyond popularisations.

Thanks
Bill

#### Elroy

Yes, I'd like to wholeheartedly second bhobba's above comment. For me, it's important that we can develop an (accurate) "conceptual" understanding of these phenomena, possibly with a great deal of use of "everyday" language. Everyday language has the problem that it's sometimes slippery around the edges, so we do need to be precise with our definitions. However, for me, that's what a great deal of this is about: the development (and acquisition) of the precise language of physics.

Also, I think this approach is important for these concepts to "stick". If they are "purely" abstract, our minds seem to struggle to attach them to the rest of our knowledge. However, this is where a great deal of the struggle comes in. As far as can be understood, all of our "concrete" perceptions are in three spatial dimensions and one temporal dimension. As an example, relativity often talks about Minkowski space as a four-dimensional static visual space (including time) in which to talk about things. It collapses the dynamic dimension of time into a static spatial dimension.

In this depiction, the actual three observable spatial dimensions are only given two dimensions, so the third can be given to time. (Just FYI, the cones are where "causal" events can take place, assuming the speed-of-light limit is obeyed.) To make sense of this depiction, we must "stretch" our minds to recognize that the 2D space is truly attempting to represent the 3D spatial space in which we live. That is why it is labeled a "hypersurface".

Regarding math, we can quite easily represent vectors (or even tensors or spinors) in a space (spatial, temporal, or otherwise) of as many dimensions as we like. However, we do often lose the ability to "grasp" the underlying concepts. Furthermore, math can be wrong. In the end, math is yet another (hopefully more accurate and concise) language in which to outline this stuff. As with any other language, it can tell lies. I'm tempted to also include the math-thought-reality tri-image developed by Penrose, but I'll leave it to others to explore that.

p.s. I'm still working on a pictorial way to represent the transition from classical probability theory to the correlations observed with quantum superposition and entanglement (focusing specifically on complete entanglement with EPR pairs, as a first pass). I'm having fun with it.


#### bhobba

Mentor
Relativity is a good example of math giving deep insight.

Check out the following derivation of the Lorentz transformation:
http://www2.physics.umd.edu/~yakovenk/teaching/Lorentz.pdf

Its premises are so plausible and fundamental no-one would care to doubt them. In fact we see the speed-of-light thing in SR is simply fixing a constant that naturally occurs from other more basic symmetry considerations. Yet it has these startling consequences. It's the power and beauty of math.

Math is not about long boring calculations - it's about concepts and their consequences.

Thanks
Bill


#### Jimster41

Gold Member
There's nothing in that Lorentz transform derivation that I'm unable to follow. I am up on that level of math. And I believe it all. It's just not very portable or compact, at least for me, in terms of trying to get to the next part, or having something I can walk down the sidewalk thinking about. Elroy's 3D spacetime diagram (that's what they're called, right?), on the other hand, I was already pretty comfortable with...

Reading Penrose's Cycles of Time. I feel like I almost get conformal diagrams, but the strict conformal diagrams "don't stick" quite yet.

I did read DrChinese's page, and I'm enjoying Gisin's "Quantum Chance". Maybe this is where my question about Alice and Bob went wrong: Gisin uses the joystick metaphor rather than the coin toss. My confusion is with regard to how a decision made far away/earlier by Bob's joystick seems, to me, to be determining what's going to happen to Alice. Bob is standing there betting her on what's going to happen (he knows), just to represent the discomfort (that I'm having) implied by non-local determinism. Once Bob has moved his joystick, Alice's outcome is set, and her joystick doesn't do what she thinks it does anymore - or does she always have to move her joystick exactly when Bob does, for the metaphor to be consistent with the math? I get that Bob couldn't control the result of his joystick - it's just that whatever he got the instant he jumped the gun is now some non-local thing playing a deterministic role in Alice's present and future. Is it just that the non-local influence causes an outcome that seems just as random, as far as Alice can tell, as her joystick would have given anyway? This is why I wanted to have Bob there gloating: to drive home the idea that what she thought was random - and looks just like the usual random stuff - is, from Bob's perspective, which represents another place, in the past, determined.

Well, now that I say it like that, what could be less weird? But then it's an influence outside all causal norms. As far as Alice knows (can tell), nothing causal has happened to her joystick to make it deterministic rather than random.

In entanglement experiments, I vaguely recall hearing the phrase "delayed choice" - is that all this is? Anyway, I'm just trying to grok the concept at some level I can enjoy in daily life - which of course may not be possible...

I realize I have a surprising and possibly ridiculous notion that entanglement is somehow a significant, frequent, dispersed, even ubiquitous thing out there - which is why Bob's gloating is sort of disturbing. I can imagine this is not at all interesting if Bob and Alice's joysticks are so rare, co-located, and short-lived as to be a pretty much irrelevant phenomenon to existence.


#### Jimster41

Gold Member
If you want a visual, there is one class that most closely matches experiment. Most people reject these because causality is not respected. These are the time symmetric group of interpretations. In these, you have the following 2 key elements:

a) Both the past and the future are elements of the experimental context. So Alice's setting and Bob's setting are both part of the equation when particle pairs are created, even though they are set in the future.

b) Otherwise, locality (c) is always respected. Despite the "non-local" appearance of entangled pairs (which I don't dispute in any way): in these interpretations, everything is cleanly connected by local action.

One of the advantages of this visual is that it naturally explains entanglement which occurs after detection. Sophisticated experiments allow after-the-fact entanglement (hard to believe but true - you can entangle particles that no longer exist). Such is not natural in many other mechanistic explanations.

I totally need a picture. The above (I hadn't understood this post at all the first time I read it) is exactly what I mean, I think - what I am amazed and confused by. Strange how the sentence "Both the past and the future are elements of the experimental context" gives me a connotation-rich image I can sort of hold onto, to represent the meaning the amazing math has uncovered. I just hope it's sort of correct...

So now I would love to have just a couple of more pieces squared away.

Is there any sense in which entanglement is a significant feature of space-time evolution as we experience it? Or is it an utterly fleeting and rare exception? Or do we not know?

Does the bizarre, non-causal, a-temporal-sounding statement a) above turn out to be utterly innocuous because the "experiment" always provides random results - leading to a conclusion like "well, the future is already set, but the result is random"? In which case, what's the difference between random and uncertain? I'd guess it's about one big beer.

But then random seems unsatisfactory, if entanglement is a phenomenon that is involved in our evolution. This feels almost completely fuzzy, but I get this icicle in my brain that's asking "where does all the structure come from?"

#### Jimster41

Gold Member
Someone asked if the photon probability wave front hit the slits at the same time. An innocuous question... that, as I try to answer it for myself, has me pretty much confused.

Position was uncertain
Can't go faster than c
It was entangled with all the stuff around it forward and backward in time and out to freaking infinity but that matters not because baseballs are made of quantum sh--- and we can manage those pretty good


#### bhobba

Mentor
It was entangled with all the stuff around it forward and backward in time and out to freaking infinity but that matters not because baseballs are made of quantum sh--- and we can manage those pretty good
Where are you getting this from?

Here is a much better analysis:
http://arxiv.org/ftp/quant-ph/papers/0703/0703126.pdf

Thanks
Bill

#### Jimster41

Gold Member
Where are you getting this from?

Here is a much better analysis:
http://arxiv.org/ftp/quant-ph/papers/0703/0703126.pdf

Thanks
Bill
Thanks, it looks good. I am pretty weak on the bra-ket notation, but it sort of reads intuitively. I understand the setup and the three cases; it seems very helpful to break it down that way. I'm going to study it for a while.

The math defines some pieces of my limit: $e$, $i$ and $\pi$, and combinations thereof, especially ${ e }^{ -i }$. I have an easier time with Planck's constant in the position-momentum relation, and Boltzmann's constant in the energy-temperature relation. Not that they aren't utterly puzzling. But they don't break my reading of equations the way $e$, $i$ and $\pi$ do. I often wonder if you guys really "feel" or "see" those when you read them. I imagine you do, but maybe that's not true.

Trying to read this has already been helpful, because I remembered this morning how in my DiffEq class, and then later in Signals and Systems, we learned how Fourier's theorem shows you can create arbitrary signals by adding together sine waves written in that ${ e }^{ -i }$ notation. Fourier really stuck, even though I was, and still am, baffled by how you use ${ e }^{ -i }$ to make sine waves. It's the periodicity of $-i$, or something like that, and then $\pi$ gets in there with the wackness of the very circle itself. I need to refresh my memory on it. So do I understand correctly that those terms represent the points on Schrodinger's wave equation, for purposes of integrating over its point-wise interaction with the... eigen things...
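(For anyone else stuck on the same point: the connection between ${ e }^{ -i }$ and sine waves is Euler's formula, $e^{i\theta} = \cos\theta + i\sin\theta$. Here's a tiny numeric sanity check - just a sketch using Python's cmath, nothing specific to QM:)

```python
import cmath
import math

# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta).
theta = math.pi / 3
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(abs(lhs - rhs))  # effectively 0: the two sides agree

# Sine waves come from combining the two counter-rotations:
# sin(t) = (e^(i t) - e^(-i t)) / (2i), which is why e^(-i ...)
# terms show up when Fourier series are written in exponential form.
t = 0.7
sin_from_exp = (cmath.exp(1j * t) - cmath.exp(-1j * t)) / 2j
print(abs(sin_from_exp - math.sin(t)))  # effectively 0
```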

Eigenstates and eigenfunctions. I made the grade in Linear Algebra, but frankly it was frustrating... because I walked away feeling like it was just dutiful plug-and-chug: doing the homework, taking good notes. I have almost no 'feel' for what determinants, eigenstates, and eigenfunctions mean as operators. They turn matrices into scalars, and ... functions, or vectors of scalar points, or functions of points. I need to drill into that one. It's a real obstacle to reading math.

I can pick it up later with the linearity of the QM operations

So thanks this is helpful. I am still curious at the end of the day about the philosophical implications of this stuff - and I wonder sometimes if maybe you all are saying there really aren't any. Or is it more that it gets weirder the more precise one's understanding?

Sorry for going on and on, and I didn't mean to hijack this thread.


#### bhobba

Mentor
So thanks this is helpful. I am still curious at the end of the day about the philosophical implications of this stuff - and I wonder sometimes if maybe you all are saying there really aren't any. Or is it more that it gets weirder the more precise one's understanding?
There are philosophical implications all right - eg the overthrow of naive reality by Bell's theorem - just not the things you read in some populist accounts: rubbish like consciousness-created reality and other mystical twaddle found in trash like What the Bleep Do We Know!?

Thanks
Bill

#### Elroy

Alright, I'm still on the idea of straightforward (and visual) explanations of the violations of Bell's inequalities, and how they illustrate the quantum weirdness. I'm going to lay out some pieces here, and open them up to discussion.

Bell’s inequality is often stated as

ρ(a,c) – ρ(b,a) – ρ(b,c) ≤ 1,

where the ρ (rho) values are correlations between events. However, this inequality can be stated in a somewhat simpler form as a straightforward probability problem. Let’s say we have a coin. (We’ll talk a bit later about whether or not it matters that it’s “fair”.) We’ll let “tails” represent a ZERO (0), and “heads” represent a ONE (1). Furthermore, let’s assume that we’re going to flip it three times. The following table provides all the possible (eight) outcomes:
Code:
Flip#1(a)  Flip#2(b)  Flip#3(c)
0          0          0
0          0          1
0          1          0
0          1          1
1          0          0
1          0          1
1          1          0
1          1          1
We will assume that the event we are interested in is “heads” (a ONE). Let’s identify Flip#1 as event “a”. We will also call Flip#2 event “b”, and Flip #3 event “c”. Furthermore, I’ll use the exclamation point (!) to indicate a “NOT”, such that !b would indicate the NOT b event (or that b didn’t happen, or that we didn’t get heads). I will also use the ∧ symbol to denote a boolean AND. This simply means that both events happened.

Now, I’d like to define three separate possible outcomes for our three flip events:

Outcome#1: a ∧ !b (a happened, b did not happen, and c doesn’t matter)
Outcome#2: b ∧ !c (b happened, c did not happen, and a doesn’t matter)
Outcome#3: a ∧ !c (a happened, c did not happen, and b doesn’t matter)

Let’s go back to the above table of possibilities. We find that the outcomes are identified as:

Code:
(a)  (b)  (c)    Outcome#1  Outcome#2  Outcome#3
0    0    0
0    0    1
0    1    0                   *
0    1    1
1    0    0        *                     *
1    0    1        *
1    1    0                   *          *
1    1    1
We can now imagine a situation where we do this three-coin-flip experiment over and over, each time flipping the coin three times. We count (N) how many times each of our outcomes of interest occurs, recognizing (according to the above table) that there will be occasions where more than one outcome happens during a single three-flip experiment. With this information, we can state the following inequality:

N(a ∧ !b) + N(b ∧ !c) ≥ N(a ∧ !c)

(Be sure to think this through. If we trust probability theory, and common sense, this has to be true.)

This is self-evident by the fact that (a ∧ !c) cannot occur unless either (a ∧ !b) or (b ∧ !c) also occurs. It’s straightforward to turn each term in the above inequality into a proportion, by simply dividing by the overall N (number of times we did the experiment). This transforms the inequality into:

p(a ∧ !b) + p(b ∧ !c) ≥ p(a ∧ !c)
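(As a quick sanity check of the counting argument, the eight equally likely rows of the table can be enumerated by brute force - a small sketch, with variable names of my own choosing:)

```python
from itertools import product

# Count, over all eight equally likely (a, b, c) rows, how many rows
# realize each outcome of interest.
n1 = n2 = n3 = 0
for a, b, c in product((0, 1), repeat=3):
    n1 += 1 if (a and not b) else 0   # Outcome#1: a AND NOT b
    n2 += 1 if (b and not c) else 0   # Outcome#2: b AND NOT c
    n3 += 1 if (a and not c) else 0   # Outcome#3: a AND NOT c

print(n1, n2, n3)     # 2 2 2: each outcome covers two of the eight rows
print(n1 + n2 >= n3)  # True: the counting form of Bell's inequality
```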

This is actually one form of Bell’s inequality, and I hope everyone is somewhat comfortable with the above outlined arguments.

Now, I want to extend this argument to something we can rather easily generalize to randomly linearly polarized photons. (For this argument, let’s ignore the cases of circular and elliptical polarization; just FYI, they do not invalidate any of the arguments.) We might say that our photons are being emitted one at a time from some emitter in direction z, and that they are randomly (linearly) polarized at some angle in the x,y plane, the plane orthogonal (normal) to the direction of propagation (z).

For visualization, let’s say we have a magic (golden) disk that emits brass bars (at the speed of light, if you like) once a second at random angles (orthogonal to the direction of propagation):

Let’s further assume that we have three “events” (a, b, and c) that we wish to test (very much like flipping our coin). We would like to “test” whether or not our bar passes these events. When placed orthogonal to the direction of propagation, we would like to see if our brass bars will pass through disks that look like one or more of the following:

The following is an example of the brass bar passing test a (where the z axis is now pointing directly into the picture).

Now, to illustrate Bell’s inequality, we can stack our tests, and we’d have something like the following:

We can now imagine an emitted rod that passes a (red) but fails (bumps into) b (green). If the rods are randomly “polarized”, then the proportion of the time that will happen is the area of green we can see compared to the entire circle. This works out to be a linear relationship. It is simply the difference in angle between event a and event b, which is 22.5°, divided by 360°. This works out to be .0625. So, outcome#1 will happen about 6.25% of the time.

We can also work out outcome#2, (b ∧ !c) and not caring about a:

This is also a 22.5° difference, which will also happen about 6.25% of the time.

Therefore, returning to Bell’s inequality, outcome#3 cannot happen more than 12.5% of the time.

Bell’s inequality:
p(a ∧ !b) + p(b ∧ !c) ≥ p(a ∧ !c)
.0625 + .0625 ≥ p(a ∧ !c)
.125 ≥ p(a ∧ !c)

Now, let’s look at the (a ∧ !c) situation with our disks (outcome#3):
This time, the “exposed” blue (c) area represents a 45° difference in the disks. 45°/360° = .125, just exactly on the border of still being within Bell’s inequality. In fact, it’s the equality version of Bell’s inequality.

Therefore, according to everything I’ve presented, Bell’s inequality holds.
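(For anyone who wants to play with the rod picture numerically, here is a Monte Carlo sketch. The "pass" rule below - the bar passes a test when its angle lies in the half-circle starting at the test's orientation - is an assumption of mine, chosen because it reproduces the linear angle/360° proportions described above:)

```python
import random

random.seed(0)

def passes(theta, test_deg):
    # Assumed hidden-variable rule: the bar passes a test when its
    # angle lies within the half-circle starting at the test's
    # orientation. This reproduces the (angle difference)/360 rule.
    return (theta - test_deg) % 360.0 < 180.0

A, B, C = 0.0, 22.5, 45.0   # orientations of tests a, b, c in degrees
N = 100_000
o1 = o2 = o3 = 0
for _ in range(N):
    theta = random.uniform(0.0, 360.0)   # randomly oriented brass bar
    pa, pb, pc = passes(theta, A), passes(theta, B), passes(theta, C)
    o1 += pa and not pb   # expect ~22.5/360 = .0625
    o2 += pb and not pc   # expect ~22.5/360 = .0625
    o3 += pa and not pc   # expect ~45/360  = .125
print(o1 / N, o2 / N, o3 / N)
print(o1 + o2 >= o3)   # True on every run: the inequality holds row by row
```

Note that the final inequality holds on every run, not just on average, because for each individual bar (a ∧ !c) cannot occur without (a ∧ !b) or (b ∧ !c) also occurring.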

I’ll stop here until another post, but I can absolutely tell you that a randomly linearly polarized photon will get through event a .5 proportion of the time, and that a photon, after getting through event a, will then fail event b .1464 proportion of the time (sin²(22.5°)). Multiplying these together (.5 × .1464), we determine that .0732 is the proportion of observing outcome#1.

The same logic can be worked out for outcome#2 (occurring .0732 proportion of the time) while ignoring event a altogether.

Now this is where things get interesting. Still using photons, it can be shown that outcome#3 (a ∧ !c) will occur with a proportion of .25. We work this out with the knowledge that the photon will get through test a with a .5 proportion. After passing a, it will fail to get through c .5 proportion of the time (sin²(45°)). Multiplying these together, we get .5 × .5 = .25.

Plugging these into Bell’s inequality, we get:

p(a ∧ !b) + p(b ∧ !c) ≥ p(a ∧ !c)
.0732 + .0732 ≥ .25, which is NOT correct. Bell’s inequality is violated.
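(The quantum numbers above can be reproduced in a few lines, taking as given that a photon of known polarization passes a filter offset by θ with probability cos²θ, i.e. 1 − sin²θ:)

```python
import math

def malus_pass(delta_deg):
    # Malus's law: a photon of known polarization passes a filter
    # offset by delta degrees with probability cos^2(delta).
    return math.cos(math.radians(delta_deg)) ** 2

p_a = 0.5                           # random polarization: 50% pass the first test
p1 = p_a * (1 - malus_pass(22.5))   # outcome#1: pass a (0 deg), fail b (22.5 deg)
p2 = p_a * (1 - malus_pass(22.5))   # outcome#2: pass b, fail c (same 22.5 deg gap)
p3 = p_a * (1 - malus_pass(45.0))   # outcome#3: pass a, fail c (45 deg gap)
print(round(p1, 4), round(p2, 4), round(p3, 4))   # 0.0732 0.0732 0.25
print(p1 + p2 >= p3)                # False: Bell's inequality is violated
```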

I’ll explain this more in a subsequent post if there are interesting posts by others. It should be recognized that this does NOT involve EPR pairs. It is a single photon experiment that illustrates a violation in Bell’s inequality in the quantum world.

There is a fairly straightforward “loophole” that people have used to attempt to explain this violation. Can anyone explain it? I’ll give a tip. It has to do with distinctions in what happens with measurements in the classical world versus the quantum world. If interested, I’ll explain it.

And, if interested, I'll carry these explanations forward into the EPR experiments that conclusively show that local causality (locality), even when influences are allowed to travel at the speed of light, cannot explain empirically observed quantum phenomena.

Last edited:

#### Jilang

Elroy, I have noticed that these inequalities can be expressed and understood nicely in the form of Venn diagrams.
(A+B+C+D)> (A+D) etc.
I also notice that if you take the square root of each of the terms the inequality still holds.
In your example 0.0732 + 0.0732 > 0.25 is violated, but 0.27 + 0.27 > 0.5 holds.
The difference between the classical and the quantum seems to arise due to the probability being the square of the amplitude. If the amplitudes are considered in the Venn diagrams instead there seems to be no issue. I am wondering if the difference arises due to the equating of the probability with some inherent property rather than equating the amplitude with that property?

#### Elroy

Jilang, your idea of taking the square root is interesting, but we can't just do arbitrary mathematical manipulations. In other words, all of our equalities (or inequalities) have to be grounded in empirical experimental evidence. Sure, we can set up mathematical models upon which to form hypotheses, but we must still state how these mathematical models would be empirically (experimentally) tested.

Hopefully, in post #61, I outlined how a bar emitted from a "magic disk" at some angle (orthogonal to direction of progression) would work out to Bell's inequality. In fact, it would be fairly straightforward to do an experiment like this. We could have some motor that spun and then slowed to a stop such that it stopped at completely random orientations (to the axis of the motor). We must also imagine a bar attached orthogonally to the motor's axis. Then, once stopped, the device ejected the bar directly away from the end of the axis.

Next, it came into contact with our red (test a), green (test b), and/or blue (test c) plates. We might want to do this in space, or have the axis of the motor pointing straight down, so that gravity didn't mess us up. However, hopefully, we can see that it could be done.

Then, after "dropping" many bars, we would tally up our Outcome#1 (a∧!b), Outcome#2 (b∧!c), and Outcome#3 (a∧!c) probabilities, all empirically derived. In this case, over the long run (asymptotically), we would see that Bell's inequalities would hold. Furthermore, we would see that changing our angles would change the probabilities in a linear fashion. In general, it's always just dividing the difference in angles by 360° to find the individual terms of Bell's inequality.

Now, regarding the quantum (photon) situation, we again rely on empirical observations of experiments, and then attempt to derive mathematical "models" of these results. I'll focus on the Outcome#3 (a∧!c) situation because this is the one that's possibly most interesting. Experiments tell us that photons do not act like our brass rods, and this is where things get interesting. Initially, we might imagine our photon as having random linear polarization approaching a vertical polarizer. Experimentally, we know that 50% will get through. This actually is just like the brass rod. However, when a photon is "measured" (tested, test a), it is simultaneously re-oriented to have precisely vertical polarization, even if the polarization was initially off from vertical. This is the whole idea in QM that things can't be measured without also "changing" them. Therefore, once a photon is measured as to whether or not it's vertically polarized, after the measurement it will be precisely vertically polarized (if it passes the test).

Now, test c is actually a test of whether or not the photon is polarized at 45°. Classically, we go (45° − 0°)/360° to get a .125 probability of (a∧!c) (Outcome#3). However, this isn't how things work out in the quantum world. Empirically, it has been shown that a photon of previously "known" polarization will pass through a polarization filter rotated by a known angle from that "known" polarization exactly 1 − sin²(θ) (that is, cos²(θ), Malus's law) of the time, where θ is the difference between the "known" polarization of the photon and the angle orientation of the polarization filter.

Therefore, we know our photon will "pass" our test a .5 proportion of the time. And now, using our 1 − sin²(θ) rule, which has been empirically verified in many experiments, we know that the photons that passed test a (and are then vertically polarized) will "pass" our test c 1 − sin²(45°) = .5 proportion of the time (and therefore "fail" test c .5 proportion of the time). Therefore, our outcome#3 is .5 × .5 = .25.
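(Here is a small Monte Carlo sketch of that argument, with the collapse rule made explicit - a photon that passes a filter has its polarization re-set to the filter's angle. The function names are mine, and the model is only a sketch of the measurement step:)

```python
import math
import random

random.seed(1)

def measure(theta_deg, filter_deg):
    # Malus's law gives the pass probability; on a pass, the photon's
    # polarization collapses to the filter's orientation (the key
    # quantum ingredient: measuring re-orients the photon).
    p = math.cos(math.radians(theta_deg - filter_deg)) ** 2
    if random.random() < p:
        return True, filter_deg
    return False, None    # absorbed by the filter

N = 100_000
o3 = 0
for _ in range(N):
    theta = random.uniform(0.0, 180.0)    # random linear polarization
    ok_a, theta = measure(theta, 0.0)     # test a: vertical filter
    if ok_a:
        ok_c, _ = measure(theta, 45.0)    # test c: 45-degree filter
        if not ok_c:
            o3 += 1                       # outcome#3: passed a, failed c
print(o3 / N)   # close to 0.25, double the classical 0.125
```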

The most interesting thing is that this .25 is different from the classical .125 proportion. In fact, it's doubled. Photons will meet the criteria of our proposed outcome#3 at a rate that is twice that of our brass rods. This is precisely because "measuring" photons (having them pass through filters) alters them, whereas we assumed that measurements on our brass rods did not alter them (specifically, did not alter their angular orientation).

Again, this isn't just playing around with math. This is using math to model empirical/experimental findings.

Regards,
Elroy

#### Jilang

Elroy, it seems we have all the experimental evidence already and the maths for it. It's the physical mechanism that's missing. Are you suggesting that the act of measuring in some way rotates the plane of polarisation?

#### Jimster41

Gold Member
Are you suggesting that the act of measuring in some way rotates the plane of polarisation?
Just trying to understand this myself (to the degree it is possible). I just started Susskind's "Quantum Mechanics: The Theoretical Minimum". In the opening chapters he goes through the difference between classical and QM expectations from measurements of polarization - and I got, for the first time, a mental cartoon of the "preparation" step, which, as I understand it, is irrelevant in the classical expectation but is inextricably tied to the experimental results in the QM case. So my hip-shot answer to the above was "exactly!". I hope this is right (or at least not wrong). I now have a mental cartoon of the measurement step of a sequence of experiments interacting with the result sequence in a bidirectional way, which I didn't quite have before. Still staring at it.

I'm also reading Gisin's book. On page 88, in a section titled "Alice and Bob Each Measure Before Each Other", he goes into the questions that arise when Bob and Alice are in different rest frames: what does simultaneous measurement mean, and if measurement is not simultaneous, then who measures first, and which result (Bob's or Alice's) determines the other? I haven't read it (or tried to read it), but here's a link to the paper they did on the experiment to test the case: http://arxiv.org/abs/quant-ph/0210015. I gather QM won, but I'd be lying if I said I really get what that means - except that GR and QM are in conflict? (Later edit: No, that's wrong, I realize, going back some pages in Gisin, p. 50. GR is not in direct conflict with QM, because no meaningful information can be transmitted via the "non-local" thingamajig. He goes on to outline, somewhat euphemistically, their "peaceful coexistence".)

In the next section Gisin goes on to talk about "Superdeterminism and Free Will", so I guess that's a hint.

Which confirms for me somewhat, that the question I was failing to ask coherently, or in proper terms (Bob leering at Alice because he's predetermined the results of her "experiment"), was at least a good one after all (which is a relief). Wish I felt like there was an answer to it though! I should add that I love Gisin's answer, which I gratuitously paraphrase as "If there is no free will, why are there questions?" ;-)

So here's another one that's bothering me today. And I recognize I need to better understand the difference between superposition and entanglement. But what happens to a QM probability distribution when the space it occupies is expanding (when there is a positive cosmological constant)? Is the new region "entangled", in "extended superposition", the exact same thing as it was prior to expansion? Is that an incomprehensible (unintelligible) question? Or is it a comprehensible one with a simple answer, or one that, though comprehensible, just goes right off into philo-puzzle-land?

Elroy, I got from your description of the simple coin-toss model the same sensation of understanding I got from Gisin's description - which I have, I hope not too incorrectly, taken away as: "No two independent random things can be coordinated more than a certain % of the time, just because of what random means - which is, out of two ways, a random thing is one way half the time and the other way half the time." I haven't seen the Venn diagrams of Bell's inequality. I'm betting I would like them.


#### Elroy

Ok, I'll comment on Jimster's post a bit first. Wow, truth be told, I'm still trying to get rock solid on all the implications of the wave function (and the whole wave-particle duality thing). I'll start thinking about Lorentz transformations and different inertial frames of reference once I get these things clear. If I try and sort it out all at once, my head will explode. So, I guess I'm proposing that everything in post #61 is happening in the same inertial frame of reference. In fact, everything I'm talking about has to do with a single photon. So, I guess we could say that we're just talking about Alice, and the heck with Bob.

And now to Jilang.
Are you suggesting that the act of measuring in some way rotates the plane of polarisation?
Yes, that's exactly what I'm proposing. In fact, we know this rather conclusively. As an example, we can take randomly polarized light (such as sunlight) and exactly half of it will get through a linear polarizing filter.

However, now let's imagine that we have two linear polarizing filters, one oriented vertically and the other oriented horizontally. If we put them together, no light will get through. But how is this possible? For any single filter, 50% of the light will get through. So, if the first filter didn't "change" the light (i.e., change the polarization), then 50% should get through the first filter, and then 50% of that 50% should get through the second filter (with 25% coming out after both filters). But again, we know that this doesn't happen. 0% of the light gets through. That pretty much proves that the measurements somehow alter the polarization of the light.

Now, to make things even more weird, let's say we still have our two linear polarizers (one oriented vertically (V) and the other horizontally (H)). Now, let's take a third linear polarizer and orient it at 45° (halfway between vertical and horizontal). If we slip it between the V and H polarizers, 12.5% of the light will get through, whereas with only the V and H polarizers, NONE of the light got through. I'll leave the reasoning for that up to you.
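(The arithmetic for both cases is a one-liner per stage, assuming Malus's law for each filter in sequence - a sketch, not a full optics model:)

```python
import math

def malus(delta_deg):
    # Fraction of photons passing a filter offset by delta degrees
    # from their (known) polarization: Malus's law, cos^2(delta).
    return math.cos(math.radians(delta_deg)) ** 2

# Crossed polarizers V (0 deg) then H (90 deg): nothing gets through.
through_vh = 0.5 * malus(90.0)
# Slip a 45-degree polarizer between them: V, then 45 deg, then H.
# Each stage re-orients the photons that pass, so each 45 deg step
# transmits half of what reaches it.
through_v45h = 0.5 * malus(45.0) * malus(45.0)
print(round(through_vh, 6), round(through_v45h, 6))   # 0.0 0.125
```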

Bottom line, in the quantum world, taking measurements virtually ALWAYS alters things.

Regards,
Elroy

#### Elroy

Just to say a bit more, that's the "opening of loophole #1" in explaining how QM is different from classical situations. In the classical world, we can measure things without changing them. I can measure the width, depth, and height of my refrigerator without changing it. However, to explain things (just like what I outlined in post #66), we must admit that measurements DO change things in the quantum world.

But all of that opens the door for the EPR pairs paradox. With perfect correlations between Alice and Bob, we can argue that Alice's measurements not only "change things" for her, but that they ALSO "change things" for Bob. How is THAT possible? We can also set things up such that the changes happen faster than the speed of light.

#### Jimster41

Gold Member
we must admit that measurements DO change things in the quantum world.

But all of that opens the door for the EPR pairs paradox. With perfect correlations between Alice and Bob, we can argue that Alice's measurements not only "change things" for her, but that they ALSO "change things" for Bob. How is THAT possible? We can also set things up such that the changes happen faster than the speed of light.
Doctor, I concur.

Also, I apologize again for butting in here with my questions. It's just been an interesting discussion to follow. I feel like I'm in your boat.

The book by Gisin, though a "popularization" (by a guy who won the Bell Prize and invented Portable Quantum Cryptography or something) has been a real help for me I might add... though... I can imagine you might also find it disappointing on one level... the conundrum of just how "non-local randomness" (Gisin's words) can be... is not less sharp after reading it... it is more. And I feel betrayed in small part because I thought this guy was going to explain how it could be... he just made it more unavoidably clear... that it is! and the pros all have their brains on bust trying to figure it out... at this very moment... wherever that is.

Carry on...

