Quantum Annoyance (EPR and Bell's Inequality related).

In summary: what this experiment measures is the correlation between the two detectors. The Mermin experiment has two detectors and eight bins; each bin has three slots, and each slot is either a Red slot or a Green slot. Each of the two detectors also has two lights on top: a Red light and a Green light. In each run, two singlet-state particles are emitted from the emitter, one toward each detector. Each particle then falls upon one of the eight bins (a "bin" is a 3-slot receiver coded with R's and G's).
  • #1
Bible Thumper
I'm still having difficulty trying to make sense of this data:

31RR 12GR 23GR 13RR 33RR 12RR 22RR 32RG 13GG
22GG 23GR 33RR 13GG 31RG 31RR 33RR 32RG 32RR
31RG 33GG 11RR 12GR 33GG 21GR 21RR 22RR 31RG
33GG 11GG 23RR 32GR 12GR 12RG 11GG 31RG 21GR
12RG 13GR 22GG 12RG 33RR 31GR 21RR 13GR 23GR

It may look familiar to some of you; it came from here. This set of data in particular is important because it says that whenever the electrons both have +1/2 spin or both have -1/2 spin, the lights will both be Red or Green.

How can we prove this by creating a computer program? Also, can someone give me a heuristic explanation of how this phenomenon actually takes place? In other words, why do the lights flash the same color 1/2 the time rather than 5/9ths of the time?
 
  • #2
Bible Thumper said:
This set of data in particular is important because it says that whenever the electrons both have +1/2 spin or both have -1/2 spin, the lights will both be Red or Green.
The data represents the results of performing Mermin's gedanken experiment many times. One thing that it says is that when both detectors are set to the same angle, the outcomes (Red or Green) are 100% correlated.

How can we prove this by creating a computer program?
What do you mean "prove this"? The results of the experiment are arranged to be consistent with what quantum mechanics tells us.
Also, can someone give me a heuristic explanation of how this phenomenon actually takes place? In other words, why do the lights flash the same color 1/2 the time rather than 5/9ths of the time?
What do you mean by "heuristic explanation"? If you're asking if there's a simple way to understand how this could happen using some kind of local mechanism (see Mermin's "instruction sets"), the point of the article is that there is no such local mechanism that can reproduce those results.

The quantum mechanical algorithm for producing the data is explained in the paper.
 
  • #3
Doc Al said:
The data represents the results of performing Mermin's gedanken experiment many times. One thing that it says is that when both detectors are set to the same angle, the outcomes (Red or Green) are 100% correlated.

What do you mean "prove this"? The results of the experiment are arranged to be consistent with what quantum mechanics tells us.

What do you mean by "heuristic explanation"? If you're asking if there's a simple way to understand how this could happen using some kind of local mechanism (see Mermin's "instruction sets"), the point of the article is that there is no such local mechanism that can reproduce those results.

The quantum mechanical algorithm for producing the data is explained in the paper.

I think what I was trying to say was that we can create a computer algorithm around the Gedanken experiment.

For example, we can randomly generate a three-slot listing of R's and G's (such as the spreadsheet I have in the OP). Then randomly generate the switch setting, 1 through 3.
Then see if we can get the R and G lights to flash the same color 1/2 the time, as experiment shows, or at least 5/9ths of the time, as instruction sets (Bell's inequality) would demand.

Assuming we make a computer algorithm that represents the singlet-state particle with a random spin, that the particle ends up at random in a computerized representation of a bin (the three-slot R and G bin; my spreadsheet in the OP had 45 such 'bins', or R-G combinations), and that the bins are tied to a computerized representation of the randomly generated 1-through-3 switch setting, can we get the same agreement from this computer model as we do in the Gedanken experiment?

IN OTHER WORDS:

The Gedanken experiment has:
  • A singlet-state particle emitter. It emits two particles at a time.
  • Two detectors. Each detector has:
    • Eight bins. Each bin has three slots.
    • Each slot may either be a Red slot or a Green slot.
  • Each of the two detectors also has two lights on top: a Red light and a Green light.
This is what makes up the Gedanken experiment.
What happens is that two singlet-state particles are emitted from the emitter. Each particle shoots toward one of the two detectors.
The particle will then fall upon any one of the eight bins (a "bin" is a 3-slot receiver coded with R's and G's). Therefore, a singlet-state particle will reach any one of eight bins coded RRR, RGG, GGG, GRR, etc.
These bins themselves are hard-wired to three detector switch settings (switch settings 1 through 3), and the switch settings are assigned completely at random (so if a particle lands on a G in the three-slot bin, and if the G happens to be hardwired to a switch setting of 1, the Green light on top of the detector will light).
After the singlet-state particle falls on its bin, it will trigger either the Red light or the Green light to flash, depending on whether the particle hit an R or a G, and on whether that R or G was hardwired to the randomly selected switch setting.

This can all be recreated using a simple computer program (or computer algorithm, if you will). Simply replace:
  • The emitter of the singlet-state particle with a random number generator (the "RNG") that randomly generates a 1/2 or a -1/2. There will be a 50-50 chance of this random number generator generating a 1/2. Obviously, the results are correlated, as they are in nature.
  • The eight bins with another random number generator that produces three digits, each a '1' or a '2'. Thus the RNG will generate, at random, triples like 112, 122, 111, 211, 122, 221, 222, etc.
  • The switches and their switch settings with yet another random number generator that generates a 1, 2, or 3 at random.

Now run this computer program in the same way the Gedanken experiment was run.
Will we get the lights to flash the same color 1/2 the time, as they did in the Gedanken experiment?
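A minimal Python sketch of exactly that program (my own illustration; the function name instruction_set_run and the uniform choice of bins are assumptions, not anything taken from Mermin's paper):

Code:
import random

def instruction_set_run(trials=1_000_000):
    """Simulate the program described above: each particle pair carries one
    randomly chosen 3-slot bin (instruction set), and each detector's switch
    setting is chosen at random, independently of the bin."""
    same_color = 0
    for _ in range(trials):
        bin_code = [random.choice("RG") for _ in range(3)]  # one of the eight bins, e.g. ['R', 'G', 'G']
        switch_a = random.randrange(3)  # detector A setting (0-2, standing in for 1-3)
        switch_b = random.randrange(3)  # detector B setting, chosen independently
        if bin_code[switch_a] == bin_code[switch_b]:  # do both lights flash the same color?
            same_color += 1
    return same_color / trials

print(instruction_set_run())  # expect roughly 0.67 with uniform bins

With the bins drawn uniformly, the same-color fraction settles near 2/3, and no way of weighting the eight bins pushes it below 5/9, so it never reaches the 1/2 that the Gedanken experiment gives.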
 
  • #4
Bible Thumper said:
Now run this computer program in the same way the Gedanken experiment was run.
Will we get the lights to flash the same color 1/2 the time, as they did in the Gedanken experiment?
Short answer: No.

I'm going to assume that your computer program is an attempt to duplicate the gedanken experiment using Mermin's "instruction sets" assigned randomly to the particle pairs, with detector readings that depend only upon the random setting of the detector and the random instruction set of the particle that it detects. If so, the results of the computer simulation will not duplicate the results of the original gedanken experiment with the "Mermin Contraption" (which represents the results of real experiments).
 
  • #5
As Doc Al mentions, no deterministic computer program can yield results consistent with actual experiment. The only way to get results which match experiment is to bias them according to the choice of which 2 measurement settings are chosen. But as long as those are selected randomly or otherwise without any preferential bias, you won't be able to get results that match either QM or experiment.

Based on this outcome, you are forced to a) reject realism (i.e. that there are instruction sets); b) reject locality; or c) live in denial (hey, a lot of people fall in this category at some point). :)
 
  • #6
DrChinese said:
As Doc Al mentions, no deterministic computer program can yield results consistent with actual experiment. The only way to get results which match experiment is to bias them according to the choice of which 2 measurement settings are chosen. But as long as those are selected randomly or otherwise without any preferential bias, you won't be able to get results that match either QM or experiment.

Based on this outcome, you are forced to a) reject realism (i.e. that there are instruction sets); b) reject locality; or c) live in denial (hey, a lot of people fall in this category at some point). :)

We know we can create a computer program based on SR or GR, and that program can even show us what things look like in intensely powerful gravitational fields or at very high relative speeds.
We can also create all kinds of computer programs that likewise show how Mother Nature works: Young's double-slit experiment, nuclear fusion, stimulated emission in a laser, etc., can all be duplicated via computer.

So why can't we do the same when mimicking the Gedanken experiment in a computer program? Could it be that the bias you alluded to is the missing piece in our understanding of this apparently non-local physical phenomenon?
We can even make computer programs around the uncertainty relation. Why not this Gedanken experiment? It's like the only thing we can't replicate on a computer, for some reason!
 
  • #7
Of course you can create a computer program to duplicate the expected (and verified) results of quantum mechanics. You just can't create one using "instruction sets" in the manner you described. As DrChinese said, you'd have to build in a bias between the measurements of the paired particles--per the quantum prescription.
 
  • #8
Doc Al said:
You just can't create one using "instruction sets" in the manner you described. As DrChinese said, you'd have to build in a bias between the measurements of the paired particles--per the quantum prescription.

What is your justification for saying a program cannot be made to reflect the results (lights both flashing same color 1/2 the time instead of 5/9 the time) of the Mermin contraption? Is saying, "Bible Thumper, it just won't work if you don't have instruction sets, OKAY?" justification enough?

Doc Al, Dr. Chinese--feel free to explain how a computer might have "bias built in" to the program to execute results identical to the Gedanken experiment. How might we write the code for the program? Explaining specifically what this "bias" will look like as code would be helpful. What would this "bias" look like? We know what everything else looks like (SR, GR, stimulated emission, etc) when we write code for it. Or can it not be done because we don't know how to write code that would reflect the characteristic of this "bias" due to our failure in understanding the nature of the "bias"?

Also, I found this paper, and it's a fairly recent one. It tells me that seasoned physicists to this day are still struggling with this very same problem!:

Chapter 2: Case Study – The Aspect Experiments
2.1: Introduction
In a 1981 article published in the Physical Review Letters – co-authored by Alain Aspect, Philippe Grangier and Gérard Roger – the findings of the then most recent of a series of experiments designed to detect Bell violations were summarized in a triumphant decree: "[The] results, in excellent agreement with the quantum mechanical predictions, strongly violate the generalized Bell's inequalities, and rule out the whole class of realistic local theories." More than ten years later, in a 1994 edition of the Physical Review A, Paul Kwiat and colleagues from the University of California, Berkeley, preferring a less optimistic rendition of the situation, presented the following contradictum: "…to date, no incontrovertible violation of Bell's inequalities has been observed."

The fact that such disparate opinions punctuate the professional discourse associated with a single scientific hypothesis should stand both as evidence for the complexity of the theoretical and technical considerations bound up with its evaluation, and as a signal for philosophers concerned with the methodology of the physical sciences to stop and take notice.

Can you guys help me instead of saying it can't be explained?

P/S: Woohoo! My 77th post! 700 more posts to go until my favorite number! W00t w00t! :)
 
  • #9
Bible Thumper said:
What is your justification for saying a program cannot be made to reflect the results (lights both flashing same color 1/2 the time instead of 5/9 the time) of the Mermin contraption? Is saying, "Bible Thumper, it just won't work if you don't have instruction sets, OKAY?" justification enough?
If you model the experiment by randomly assigning "instruction sets" to the particles and have those instruction sets determine the particle measurements, then you will not get results that agree with quantum mechanics. This is clearly explained by Mermin in the paper you quote--did you actually read it?

If you understand Mermin's argument, then there's no need to write a computer program to demonstrate such. (See the top of page 44 in the original paper.) But instead of talking about it, why don't you just write it and see! You should have no trouble writing such a program; no knowledge of quantum mechanics is required.

Doc Al, Dr. Chinese--feel free to explain how a computer might have "bias built in" to the program to execute results identical to the Gedanken experiment. How might we write the code for the program? Explaining specifically what this "bias" will look like as code would be helpful. What would this "bias" look like? We know what everything else looks like (SR, GR, stimulated emission, etc) when we write code for it. Or can it not be done because we don't know how to write code that would reflect the characteristic of this "bias" due to our failure in understanding the nature of the "bias"?
Again, writing a program that models the results of quantum mechanics (and Mermin's Gedanken experiment) is straightforward. Here's one way. Randomly generate detector settings (1, 2, or 3) for each detector. Then randomly have one of the particles measured as R or G with 50% probability. Then randomly assign the measurement of the other particle per the quantum prescription--again, explained in the paper! The probability of the two particles having the same measured value is given by cos²θ, where θ is the angle between detector settings. (Detector positions 1, 2, & 3 correspond to angles 0, 120, and 240 degrees.)
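A minimal Python sketch of that procedure (my own illustration; the names ANGLE and quantum_prescription_run are made up):

Code:
import math
import random

ANGLE = {1: 0.0, 2: 120.0, 3: 240.0}  # detector positions 1-3 -> angles in degrees

def quantum_prescription_run(trials=1_000_000):
    """Random settings for each detector; the first light is R or G with 50%
    probability; the second matches the first with probability cos^2(theta),
    where theta is the angle between the two detector settings."""
    same_color = 0
    for _ in range(trials):
        a = random.choice((1, 2, 3))
        b = random.choice((1, 2, 3))
        first = random.choice("RG")
        p_match = math.cos(math.radians(ANGLE[a] - ANGLE[b])) ** 2  # 1.0 if a == b, 0.25 otherwise
        second = first if random.random() < p_match else ("G" if first == "R" else "R")
        same_color += (first == second)
    return same_color / trials

print(quantum_prescription_run())  # converges to about 0.50

The same-color fraction comes out at about 0.50; swap the cos²θ rule for instruction sets and it never drops below 5/9.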

Also, I found this paper, and it's a fairly recent one. It tells me that seasoned physicists to this day are still struggling with this very same problem!
I assure you that none of the (ever diminishing number of) physicists worrying about loopholes in Bell experiments would have the slightest problem in understanding Mermin's argument. In fact, Mermin's argument provides their motivation for (desperately) seeking such loopholes.
Can you guys help me instead of saying it can't be explained?
What do you mean "it can't be explained"? What you can't do is explain the Bell results using a local model of particle-detector interaction where the results of the measurement depend only on some local variable that travels with the particle.
 
  • #10
There seem to be misunderstandings here. There are two different statements.

The first one is: can one write a computer program that reproduces the outputs of ideal EPR experiments in a statistical way? And the answer is of course an obvious yes. That answers the question: can we distinguish between the datastream of a "real" ideal EPR experiment and that of such a computer program? And there the obvious answer is: no. Since a computer can perfectly reproduce the statistical properties of such a dataset, there is no statistical test that is going to tell the difference.

The second statement is: could we write a computer program that generates the outcomes following an algorithm in which the outcomes are calculated as a function of independently generated particle data, and detector response, when the detector choice is random and not dependent on the particle data (and the particle data are not dependent on the detector choice) ? Then the answer is no. The proof is Bell's theorem.

This is what Doc Al is saying, but I don't know if the OP sees the difference between the two. In the second case, we put specific constraints on the algorithm, and it is these constraints that make the task impossible.
 
  • #11
Bible Thumper said:
1. What is your justification for saying a program cannot be made to reflect the results (lights both flashing same color 1/2 the time instead of 5/9 the time) of the Mermin contraption? Is saying, "Bible Thumper, it just won't work if you don't have instruction sets, OKAY?" justification enough?

Doc Al, Dr. Chinese--feel free to explain how a computer might have "bias built in" to the program to execute results identical to the Gedanken experiment. How might we write the code for the program? Explaining specifically what this "bias" will look like as code would be helpful. What would this "bias" look like? We know what everything else looks like (SR, GR, stimulated emission, etc) when we write code for it. Or can it not be done because we don't know how to write code that would reflect the characteristic of this "bias" due to our failure in understanding the nature of the "bias"?

2. Also, I found this paper, and it's a fairly recent one. It tells me that seasoned physicists to this day are still struggling with this very same problem!: ...

We have some philosophical issues mixed in here among the ones related to physics.

1. No one knows the answer to why the laws of physics are as they are. Bell made a discovery that precludes a computer program of the type you describe, and Mermin explains why Bell's discovery does so. There are 2 pieces to his proof:

a) The lowest correlation ratio (i.e. matches) is 5/9, or .555. Do you follow Mermin's logic for that? If you are not sure, I can easily explain it and would be happy to do so.

b) The Quantum Mechanical prediction is 1/2. This is simply the addition of two weighted probabilities:

i) The same settings occur 3/9 of the time: 11, 22, 33. The QM prediction for these cases (0 degrees apart) is cos^2(0 degrees), or 1.0 (100%).
ii) Different settings occur 6/9 of the time: 12, 13, 21, 23, 31, 32. The QM prediction for these cases (120 degrees apart) is cos^2(120 degrees), or .25 (25%).

So we weight the probabilities as follows:

QM = i) + ii)
= (3/9 * 1.0) + (6/9 * .25)
= .333 + .167
= .500
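The same weighted sum in a couple of lines of Python, for anyone who wants to check the arithmetic:

Code:
import math

# The weighted average worked out above, plus the 5/9 instruction-set minimum for comparison.
qm_prediction = (3 / 9) * math.cos(math.radians(0)) ** 2 + (6 / 9) * math.cos(math.radians(120)) ** 2
print(qm_prediction)  # ~0.500, the QM prediction
print(5 / 9)          # ~0.556, the instruction-set minimum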

So the "instruction set" prediction is at least .555, while the QM prediction is .500. This explains why the computer program (using instruction sets) will never work using the ground rules provided. If you violate the ground rules, of course, as has already been mentioned, then you can write such a computer program.2. There are SOME scientists who believe the perfect Bell experiment has not yet been run. Of course, there are also scientists who believe the perfect test of General Relativity has not yet been run. There is really no serious controversy, it is more a matter of degree and semantics. Ask anyone who has tried, the major scientific journals routinely refuse to publish theoretical studies postulating local realism (i.e. in defiance of Bell's Theorem). This should indicate that the conclusion is on very solid scientific ground.
 
  • #12
DrChinese said:
Ask anyone who has tried: the major scientific journals routinely refuse to publish theoretical studies postulating local realism (i.e. in defiance of Bell's Theorem). This should indicate that the conclusion is on very solid scientific ground.

:rolleyes: :eek: :bugeye: I really hope that they are rejected because they are wrong and not because of consensus reasons!
 
  • #13
vanesch said:
:rolleyes: :eek: :bugeye: I really hope that they are rejected because they are wrong and not because of consensus reasons!

I couldn't say... we saw about Peter's travails recently.

I think that Bell+Aspect has put a tremendous burden on the counter argument, and it is going to take a well-fashioned and well-developed hypothesis to get much attention. As a practical matter (at least for entanglement scenarios), there is nothing at all wrong with QM and testable differences are hard to come by.

Of course, there is a lively sub-culture on arxiv.org regarding this subject and there seems to be no end to the viewpoints expressed. I see at least one "disproof" of Bell every few months. So I don't think anything too critical is being ignored.
 
  • #14
Doc Al, Vanesch and Dr. Chinese in particular, I am indebted to you guys. Thanks, Dr. Chinese, for the excellent and clear explanation you provided me in post #11; you explained it when Mermin could not (this should tell you how concise and pithy your explanations are, and that perhaps you should take up writing on subjects related to physics).
Again, please accept my abundant thanks! :)
 
  • #15
But Dr. Chinese, I'm still having difficulties. You cleared up my confusion about the 5/9ths versus the 1/2
(this is what you had to say:)
The Quantum Mechanical prediction is 1/2. This is simply the addition of two weighted probabilities
...which totally cleared that problem up for me.
But still, what about those same-result detectors (the times when both detectors get the same switch setting, 1 through 3) getting both G's OR both R's to flash?
How can it be that one detector will get, by random, a "1" for a switch setting; the other detector likewise getting a "1", and they both flash the same exact color light?!
How on G-d's green Earth is this possible?

The only thing I don't know is why we should get a same-color flash for the same switch setting even though there is no instruction set.
 
  • #16
DrChinese said:
I couldn't say... we saw about Peter's travails recently.
That's me?!?! That I couldn't get a paper published in Nature, Nature Physics, Science, or Physics Today is not a surprise. I was trying to create a writing style that would make the jump to the big time, and didn't make it; I probably fell far short. If a paper of mine on quantum theory got in any of those, I would be celebrating; it didn't, so I will be reshaping my ideas over time, again, ... That's arXiv:0810.2545 [quant-ph], "The straw man of quantum physics", also on PF. But I got J.Phys.A to publish what to me is a definitive paper on "Bell inequalities for random fields", two years ago. J.Phys.A is a major Physics journal to most people (at the time, I searched J.Phys.A for papers on Bell inequalities, and found that they have only papers about Kaon experiments). Then there's a paper in last December's J.Math.Phys., developing why Physicists might or ought to be curious about what can be done with classical continuous random fields. A gradual progression is OK. Always aim for five or ten years time, never expect people to understand today what you're trying to do. Stop if you feel yourself getting bitter that people don't appreciate how brilliant you are. My newest attempt was posted on Monday at the FQXi essay contest web-page; I was quite pleased that the requirements imposed by that contest resulted fortuitously in what is for me a conceptually new starting point. Another day.

Did you not mean me, Dr Chinese? Oh well. Anyway, for me, not being published is not because of prejudice, it's because if it was easy it would have been done already. It's hard, the new concept of random fields has to be developed, and Physicists are too busy teaching, developing their own ideas, and doing departmental administration -- they have jobs -- so sometimes they make quick decisions on stuff they haven't seen before and have no investment in. Also, I don't write well enough, etc., etc., and I may just be barking round the wrong idea. Work at it!

To the topic of the OP -- always a good idea -- you can try the work of Hans De Raedt's group, who have been publishing papers in various specialist journals about an interesting computational model that violates Bell inequalities. It has failings, but it's still interesting to understand why it can be said to fail, and it's better than most attempts. You can see my attempt at an assessment in arxiv:0801.1776 [quant-ph], which may help as a counterpoint to their point of view (this paper was rejected, rightly, by Phys.Rev.A because it doesn't do anything new; I didn't bother to try to publish it somewhere else).
 
  • #17
Guys: Is it true that without the experimental observation of the phenomenon of entanglement, Bell's Inequality could never have been violated? Does it require a demonstration of entanglement and only entanglement to violate Bell's Inequality, or can it be violated in alternate ways--ways that don't require entanglement?
 
  • #18
Peter Morgan said:
That's me?!?! That I couldn't get a paper published in Nature, Nature Physics, Science, or Physics Today is not a surprise. I was trying to create a writing style that would make the jump to the big time, and didn't make it; I probably fell far short. If a paper of mine on quantum theory got in any of those, I would be celebrating; it didn't, so I will be reshaping my ideas over time, again...

Peter,

I was referring to you, but definitely not in a bad way! I simply meant that it is a tough hill to climb, and that is a function of the success of QM and the power of Bell's Theorem.

I definitely don't mean to imply in any way that the scientific "establishment" has it in for anyone on this subject, or that good arguments are not being heard. I believe there is more to be learned, and someone will discover something that will enlighten us all.

-DrC
 
  • #19
Bible Thumper said:
Guys: Is it true that without the experimental observation of the phenomenon of entanglement, Bell's Inequality could never have been violated? Does it require a demonstration of entanglement and only entanglement to violate Bell's Inequality, or can it be violated in alternate ways--ways that don't require entanglement?

Bell's Theorem does not require experimental support. Its conclusion is essentially as follows:

No physical theory of local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics.

You DO need entanglement to perform a Bell Test. The reason for that is that you are essentially limited by the Heisenberg Uncertainty Principle from knowing about a single particle's non-commuting observables. Entanglement was a "back door" way around that, as envisioned by EPR. But it turns out that does not work after all.

Experimental violation of the Inequality shows that local realistic theories are not tenable.
 
  • #20
Bible Thumper said:
Guys: Is it true that without the experimental observation of the phenomenon of entanglement, Bell's Inequality could never have been violated? Does it require a demonstration of entanglement and only entanglement to violate Bell's Inequality, or can it be violated in alternate ways--ways that don't require entanglement?

I think your question may be mixing up conceptual systems a little, BT. Bell inequalities can be constructed for classical particle property models with essentially no additional assumptions, and for classical random field models if we impose quite strong assumptions on them. Bell inequalities are violated by experiment. The empirically effective quantum mechanical models for experiments that violate Bell inequalities are entangled.

I note, however, that entanglement --- superpositions of the tensor products of states of two subsystems --- is possible for a classical continuous random field, so entanglement does not distinguish quantum from classical in the conceptual arena of fields. That's because continuous random fields are mathematically almost identical to quantum fields. True that it's difficult-or-impossible to make entanglement make easy sense for simple-minded classical particle property models, but almost all Physicists set aside such models long ago.

I think your question won't help you understand the experiments or the models, though I'm not certain.

Piggy-backing a brief response to DrC here, I wanted to respond as much to the negativity of the post you were responding to as to your comments on my travails, to which I took no offense.
 
  • #21
A paper I read from Dr. Chinese said that Bell already was aware of the property of entanglement when he came out with his theorem. Can it not be said that Bell created his idea around the property of entanglement specifically and not necessarily for the EPR in general?
If this is the case, we can almost regard Bell's theorem as a theorem for entanglement.
 
  • #22
Bible Thumper said:
A paper I read from Dr. Chinese said that Bell already was aware of the property of entanglement when he came out with his theorem. Can it not be said that Bell created his idea around the property of entanglement specifically and not necessarily for the EPR in general?
If this is the case, we can almost regard Bell's theorem as a theorem for entanglement.
With the rise of interest in quantum computation, there has been a lot of research in recent years into the precise nature of various classes of states and operators. There are lots of papers giving rigorous results about all sorts of situations that are generally not reducible to simple statements, so I think you need to understand at least that literature relatively well, which I don't, before deciding that just thinking about entanglement is a good way to understand quantum theory. DrC?

Also, what Bell thought about his theorem in the 1960s was pretty definitively the state of the art at the time, but lots of smart people have built a lot more stuff on his foundations, so best not to approach questions about quantum theory too historically. Gotta give the man his due, though.
 
  • #23
Peter Morgan said:
With the rise of interest in quantum computation, there has been a lot of research in recent years into the precise nature of various classes of states and operators. There are lots of papers giving rigorous results about all sorts of situations that are generally not reducible to simple statements, so I think you need to understand at least that literature relatively well, which I don't, before deciding that just thinking about entanglement is a good way to understand quantum theory. DrC?

Also, what Bell thought about his theorem in the 1960s was pretty definitively the state of the art at the time, but lots of smart people have built a lot more stuff on his foundations, so best not to approach questions about quantum theory too historically. Gotta give the man his due, though.

I definitely agree, Peter. There is a lot to think about with EPR, Bell, and the meaning of the QM formalism. There are many, many elements of interest. Also, my experience is that much opinion comes down to the precise meaning of particular words. It is easy to lose the physics in the semantic debate. In addition, the discoveries in the decades following Bell are many.

The reason I push the history is for simplicity: if you start with EPR and continue to Bell and then Aspect, you have a fluent "storyline" with a pretty good - and fairly easy to follow - "ending". Of course, it is actually FAR from the ending, and we really are still grasping at understanding the mechanics. I think the benefit of Bell is that we must accept that a simple classical view of quantum mechanics is not out there waiting to be discovered. Whatever it is, it will be non-classical in some significant way.
 
  • #24
DrChinese said:
Whatever it is, it will be non-classical in some significant way.

Well, it must be non-local, which is not too hard to take. But I hear that even non-local realism has been dealt a blow? Is that right?
 
  • #25
atyy said:
Well, it must be non-local, which is not too hard to take. But I hear that even non-local realism has been dealt a blow? Is that right?

A bit of controversy on this point. I personally think the GHZ theorem is a strike against realism but technically, like Bell, it is an attack on local realism. Ditto for Hardy's Paradox. The Kochen-Specker Theorem is an attack on certain flavors of realism, regardless of locality. But I don't think it absolutely closes the door in most folks' minds.

Personally, I would like to see a non-local realistic mechanism that explains directly how Bell tests work. So far, all the Bohmian/dBB type explanations simply claim predictive equivalence with the QM formalism and stop there (with respect to entanglement). Not much satisfying in that! My point being: I am not sure a non-local realistic mechanism can be formulated that agrees with QM AND explains how entanglement actually works - which is more or less the point of assuming non-locality. We may find out that such a formulation has "baggage" that can be tested... which would be very interesting!

So who knows... could we end up with both non-local AND non-realistic...?
 
  • #28
I personally always felt that the problem is "realism"; I see no problem with locality, since no entanglement experiment has ever, to my knowledge, indicated any FTL communication. And beyond that, the notion of locality seems to be non-measurable anyway, existing only in the realist abstraction?

It seems to me the usual idea that "entanglement implies non-locality" is a direct consequence of the realism idea. It is the supposed elements of realism that are subject to the non-locality. For myself, I think it's reasonably clear that the one thing that causes this headache is realism, not locality (if we are talking about information).

I vote for local non-realism :)

/Fredrik
 
  • #29
Why can particles only become entangled under carefully controlled conditions and only with the use of exotic crystals? Why can't we entangle particles with the ease with which we diffract particles?
Why can't we apply the mechanism by which entanglement works to the whole spectrum of QM, if entanglement is supposed to be the deciding factor in our definitions of physical reality (local versus non-local, etc.)? The mechanism by which entanglement works, why can't we use that in a QM explanation of double-slit diffraction, for example?

Modern double-slit diffraction involves superposition, which I'm uncomfortable with, but don't judge me by my discomfort with something...
 
  • #30
Also, I thought Heisenberg's Uncertainty Principle worked only because the photon that's doing the observing perturbs the particle (system), thus altering it. Is this idea correct? In this light I can see how the more we know about momentum, the less we know about position, and vice versa. In other words, we have to measure the momentum of a system using a photon (or some other unit of quanta). When we do that, the position becomes unknown, as the photon that did the observing interacted with the system, altering it.

This understanding of mine of the HUP seems to have a good, classical feel to it. Is it accurate? Or am I just that clueless?
 
  • #31
Bible Thumper said:
Also, I thought Heisenberg's Uncertainty Principle worked only because the photon that's doing the observing perturbs the particle (system), thus altering it. Is this idea correct? In this light I can see how the more we know about momentum, the less we know about position, and vice versa. In other words, we have to measure the momentum of a system using a photon (or some other unit of quanta). When we do that, the position becomes unknown, as the photon that did the observing interacted with the system, altering it.

This understanding of mine of the HUP seems to have a good, classical feel to it. Is it accurate? Or am I just that clueless?

I don't think that bounce picture is a very good choice of abstraction. It is a typical realist abstraction.

In the non-realist view, the idea is to speak only about what's known to the observer. The observer has particular information, and the QM time evolution predicts how this information is expected to evolve. To ask to what extent the information is "correct" in some ultimate sense is IMO undefinable, because the only way to verify any information is to put it to the test by interaction. Interaction continuously gives feedback to the observer.

To me, the meaning of locality that makes sense is that the possible actions of an observer are affected only by the information at hand. This maintains a strong locality ideal, but totally does away with realism. It makes it plausible that particles behave as a superposition of possibilities rather than just one of them, because the actions of all parties in the game are determined by their local information.

This doesn't have to be twisted; it can be quite intuitive - look at game theory. It doesn't matter what the real state of affairs is like, because each player acts only upon the information at hand. I think the best route to finding some intuition about QM is game theory. And real life is full of these things: games, stock markets, society. As is well known, there is not always a realist basis for the value of company stocks; the value is rather collectively determined by the expectations of the players on the market. If everybody has the information that a company is doing well and will grow, the stock value rises regardless of the "real state of affairs". And to the stock market player sitting behind his desk speculating, these volatile expectation games are indistinguishable from the real thing! All players act upon THEIR information; the information of the other players is not known.

The way I see the HUP: once you define the momentum operator, the HUP follows from that definition and the rest of the axioms of QM. It's like the usual wave-decomposition analogy. The Fourier components of a wavepacket are by construction infinitely extended in space, and from that it follows that a localized wavepacket requires a large number of coordinated Fourier components.

That's the obvious part.

The question, IMHO, is rather: what is so special about the Fourier transform relating the two different distributions in x and p space? Or how come the axioms that build QM are so successful in describing reality? How did these axioms emerge?

One problem I am trying to solve is more focused on finding the question to which the Fourier transform is the unique answer. I expect it's an optimization problem. Consider that an observer has limited information capacity; then clearly, if they can find a transformation that allows them to be more predictive without expanding their memory, that seems like a good evolutionary trait. I think this is already strongly correlated with the emergence of the complex amplitude formalism, and with the origin of the "superposition statistics" that is really what makes QM different.

I expect to understand how the "momentum space" is spontaneously formed from the "configuration space", and how the data from these two views - related by a particular transformation - result in non-commutativity, since you cannot encode both the original data AND the transformed data (due to limited representation capacity). The nature of the information fed into the system (i.e. its degree of periodicity) should determine which dominates. When the system is very massive relative to the observer, the uncertainty is not resolvable and we get to the classical domain.

Is there a deeper reason why the complex amplitude formalism works so well, whose insight could help take QM to the next level and understand its connection to gravity?

This is the interesting question to me, and I think the future holds some kind of answer; to me the closest option is some new relational information theory. Ordinary QM is not fully a relational theory IMO, but it "should be". Perhaps that missing link is also why it's so hard to make sense of.

IMO, the interesting parts of this really go beyond mere interpretation. I think the problems of unifying the interactions, and also gravity, are made even worse by our own lack of a deeper understanding of QM. So I think the proof of success for any point of view, or reinterpretation of QM, will come from the one that finds the advantages needed to also solve some real problems.

/Fredrik
 
  • #32
Bible Thumper said:
Also, I thought Heisenberg's Uncertainty Principle worked only because the photon that's doing the observing perturbs the particle (system), thus altering it. Is this idea correct?

Adding to what Fra already said:

No, that idea is definitely not correct. It was an older analogy that helped people picture the situation a long time ago, but it is not accurate.

The uncertainty is NOT imparted by the measuring apparatus, which is the implication. The uncertainty derives from the acquisition of knowledge, however received. That could be from observing an entangled twin, for example. If the uncertainty came from the apparatus, then how is it that any apparatus always creates uncertainty according to the HUP? Some apparatuses, such as polarizers, impart no change in momentum yet do alter the polarization. Once re-polarized, the photon is otherwise undisturbed! And now there is complete uncertainty in the diagonal (45 degree) orientation, in accordance with the HUP.
 
  • #33
Bible Thumper said:
Why can particles only become entangled under carefully controlled conditions and only with the use of exotic crystals? Why can't we entangle particles with the ease with which we diffract particles?

The purpose of the "exotic" crystal is to provide a high output of entangled photon twins. "High" being a relative measure, as only a very very small percentage of input photons are down-converted. If you waited (and could look closely enough), you would actually see entangled pairs occasionally coming out of ordinary glass as well because they have a non-zero probability even there. The BBO crystals are cut to enhance down conversion at particular frequencies. They are somewhat akin to resonant cavities as an analogy, or perhaps acoustical harmonics. The key element to the entanglement is that there is conservation of total momentum, spin, etc.

Also, all virtual photon pairs are entangled, not that you could see them directly.

The point being that entanglement used for experiments requires favorable and controlled circumstances. Entanglement has been demonstrated in probably dozens of ways, on a wide variety of particle systems.
 
