Is action at a distance possible as envisaged by the EPR Paradox?

Summary
The discussion centers on the possibility of action at a distance as proposed by the EPR Paradox, with participants debating the implications of quantum entanglement. It is established that while entanglement has been experimentally demonstrated, it does not allow for faster-than-light communication or signaling. The conversation touches on various interpretations of quantum mechanics, including the Bohmian view and many-worlds interpretation, while emphasizing that Bell's theorem suggests no local hidden variables can account for quantum predictions. Participants express a mix of curiosity and skepticism regarding the implications of these findings, acknowledging the complexities and ongoing debates in the field. Overall, the conversation highlights the intricate relationship between quantum mechanics and the concept of nonlocality.
  • #1,381
DevilsAvocado said:
Okay, we probably misunderstand each other. Could you in simple English briefly describe how the Ensemble Interpretation explains what happens in an EPR-Bell experiment (let’s pretend it’s 100% perfect to avoid the logjam about loopholes etc)? And what is included in the "Ensemble"?
Well, first, about the source used in EPR-Bell experiments: the most common is a parametric down-conversion (PDC) crystal. It produces photon beams consisting of a mixture of H/V and V/H photon pairs (or H/H and V/V pairs in the case of PDC Type I).

From the basic laws of photon polarization we can conclude that if a polarizer is perfectly aligned with, say, the H photons' polarization axis, then all H photons will go through but all V photons will be filtered out.
But if the polarizer is at 45° with respect to the H photons, then half of the H photons and half of the V photons go through. So photon polarization plays no role in determining whether a photon goes through or is filtered out in the 45° case.
However, in this 45° case we have some other "thing" that allows us to measure correlations between photons of the same pair.

From the perspective of the Ensemble Interpretation, this "thing" is not a property of an individual photon but a relation between photons from different pairs. One common example of such a "thing" would be phase. Obviously, we can say something about the phase of an oscillator only when we compare it with another oscillator that oscillates at the same frequency.
Now, to measure phase we have to combine two photons in a single measurement so that they can interfere constructively or destructively; the measurement gives a "click" in the first case and no "click" in the second.

However, if we get a "click" every time a photon arrives (100% efficiency), there is no way to obtain any information about their relative phase. Even more: when a detector produces a "click", its state is reset to some initial (random) state, so interference between two photons arriving one after another cannot form.
So while, in the case of 100% efficiency, we can have correlations for polarization measurements at 0° or 90°, we can't have correlations for +45° or -45° measurements.

Another way to look at this is that entangled QM statistics are observable only when we combine a polarization measurement with some other measurement of a different type. A pure polarization measurement produces only product-state statistics (probability at angle "a" × probability at angle "b").


I would like to add that, in order to produce correlations with this other measurement after the polarizer, the polarization measurements at +45° and -45° have to change this other "thing" (say, phase) in an antisymmetric way. Say, the relative phase between the H and V modes changes in opposite ways if we compare the +45° and -45° polarization measurements counterfactually.

I hope I described my viewpoint clearly enough.
 
  • #1,382
zonde said:
I hope I described my viewpoint clearly enough.

zonde, I’m only a layman, and I am not saying this to be rude, but with all due respect – I think you may have missed the very core of Bell's Theorem and EPR-Bell experiments.

This is the point I was trying to address earlier:
zonde said:
From the perspective of the Ensemble Interpretation, this "thing" is not a property of an individual photon but a relation between photons from different pairs.

According to QM, it's all about probability and statistics. I think we can all agree that the probability and statistics of throwing dice do not depend on the context or situation, right? If I throw a die 1000 times in a row at home, I will get the same statistics as if we gathered 1000 PF users at different global locations, each throwing a die once, and then checked the statistics collectively. Do you agree?

Now, my point is that if we run a "collective" EPR-Bell experiment in the same way as the "1000 PF users", we should of course get the same QM statistics as in one single EPR-Bell experiment. Do you agree?

My crucial conclusion from the above is: The Ensemble Interpretation is going to run into severe difficulties with the "1000 PF users" example, since there is no ensemble present in one single entangled pair, and still we will get the violation of the local realistic inequality when we compare the collective statistics of the 1000 single entangled pairs.

I guess you will not agree with my last conclusion, but I can’t see how you could explain this with the Ensemble Interpretation?
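To make the "1000 PF users" point concrete, here is a minimal sketch in Python, assuming only the textbook QM prediction that the match probability at relative polarizer angle Δ is cos²Δ. Note that the sampler just draws from the QM joint distribution – it is not a local hidden-variable model:

```python
import math
import random

def single_pair(a_deg, b_deg, rng):
    """One isolated entangled pair, measured at polarizer angles a and b.
    Returns (alice, bob), each 1 (passed) or 0 (absorbed)."""
    alice = rng.randint(0, 1)                    # each marginal is 50/50
    delta = math.radians(a_deg - b_deg)
    match = rng.random() < math.cos(delta) ** 2  # QM: P(match) = cos^2(delta)
    bob = alice if match else 1 - alice
    return alice, bob

rng = random.Random(42)
for a, b in [(30, 30), (30, 0), (0, -30), (30, -30)]:
    runs = [single_pair(a, b, rng) for _ in range(1000)]  # 1000 separate runs
    mismatches = sum(x != y for x, y in runs)
    print(f"({a:+3d}, {b:+3d}): {mismatches / 10:.1f}% discordance")
```

Each pair lives in its own isolated run, yet the aggregated statistics come out as the Bell-violating QM statistics (roughly 0%, 25%, 25%, 75%).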

zonde said:
From the basic laws of photon polarization we can conclude that if a polarizer is perfectly aligned with, say, the H photons' polarization axis, then all H photons will go through but all V photons will be filtered out.
But if the polarizer is at 45° with respect to the H photons, then half of the H photons and half of the V photons go through. So photon polarization plays no role in determining whether a photon goes through or is filtered out in the 45° case.
However, in this 45° case we have some other "thing" that allows us to measure correlations between photons of the same pair.

Here I think you are missing the whole point. When the polarizers are perfectly aligned in parallel, even I can write a simple little computer program that mimics Local Realism by means of predefined LHVs. All I have to do is (randomly) predefine the perfect correlations (1,1) or (0,0). No problem.

And in the case of 45º it's even simpler. There is no correlation whatsoever – it's always 100% random! You couldn't prove anything at this angle, could you? :bugeye:
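Here is a minimal sketch of such a program, assuming the simplest conceivable predefined LHVs: one shared bit per pair for the aligned case, and an independent coin flip on each side for the 45° case:

```python
import random

rng = random.Random(1)
N = 1000

# Perfectly aligned polarizers: each pair carries one predefined bit,
# and both stations simply read it out -- (1,1) or (0,0), never mixed.
hidden = [rng.randint(0, 1) for _ in range(N)]       # the predefined LHVs
alice = list(hidden)                                 # Alice reads the bit
bob = list(hidden)                                   # Bob reads the same bit
matches = sum(a == b for a, b in zip(alice, bob))
print(f"aligned: {100 * matches / N:.0f}% matches")  # always 100%

# Relative angle of 45 deg: each station answers independently at random.
matches = sum(rng.randint(0, 1) == rng.randint(0, 1) for _ in range(N))
print(f"45 deg:  {100 * matches / N:.0f}% matches")  # ~50%, no correlation
```

Both special cases are reproduced locally, which is exactly why they prove nothing by themselves.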

zonde said:
So while, in the case of 100% efficiency, we can have correlations for polarization measurements at 0° or 90°, we can't have correlations for +45° or -45° measurements.

I'm totally confused. If we skip the 'faultiness' at parallel and perpendicular, are you saying that an EPR-Bell experiment with 100% efficiency cannot produce the statistics we see today!? :bugeye:

(If this is what you are saying, it must be the most mind-blowing comment so far in this thread...)

zonde said:
I would like to add that, in order to produce correlations with this other measurement after the polarizer, the polarization measurements at +45° and -45° have to change this other "thing" (say, phase) in an antisymmetric way. Say, the relative phase between the H and V modes changes in opposite ways if we compare the +45° and -45° polarization measurements counterfactually.

As I already said, 45º is totally disqualified as a decisive factor due to its 100% randomness. It won’t tell us anything about the Ensemble Interpretation, LHVT or Bell's Theorem.

Let’s instead take one of the simplest proofs of Bell's Inequality, by Nick Herbert:
If both polarizers are set to 0°, we will get perfect agreement, i.e. 100% matches and 0% discordance.
[Image: both polarizers at 0°]

To start, we set the first polarizer at +30º and the second polarizer at 0°:
[Image: polarizers at +30° and 0°]

If we calculate the discordance (i.e. the number of mismatching outcomes), we get 25% according to QM and experiments.

Now, if we set the first polarizer to 0° and the second polarizer to -30º:
[Image: polarizers at 0° and -30°]

This discordance will also naturally be 25%.

Now let’s ask ourselves:

– What will the discordance be if we set the polarizers to +30º and -30º??
[Image: polarizers at +30° and -30°]

If we assume a local reality – i.e. that NOTHING we do to one polarizer can affect the outcome at the other polarizer – we can formulate this simple Bell Inequality:
N(+30°, -30°) ≤ N(+30°, 0°) + N(0°, -30°)

The symbol N represents the number of discordances (mismatches).

This inequality is as good as any other you’ve seen in this thread.

(The "is less than or equal to" sign ≤ is just to show that there could be compensating changes where a mismatch is converted to a match.)

We can make this simple Bell Inequality even simpler:
N(+30°, -30°) ≤ 25% + 25% = 50%

This is the obvious local realistic bound.

But this is wrong! According to QM and physical experiments we now get 75% discordance:
sin²(60º) = 75%, which clearly breaks the local realistic bound of 50%.

Thus John Bell demonstrated, by means of brilliantly simple tools, that our natural assumption of a local reality is incompatible with the predictions of Quantum Mechanics and physical experiments by a full 25 percentage points.
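For anyone who wants to check the arithmetic, here is a small sketch assuming only the QM rule that the discordance at relative angle Δ is sin²Δ:

```python
import math

def discordance(delta_deg):
    """QM mismatch rate at relative polarizer angle delta: sin^2(delta)."""
    return math.sin(math.radians(delta_deg)) ** 2

n_30_0 = discordance(30)    # N(+30, 0)  -> 0.25
n_0_m30 = discordance(30)   # N(0, -30)  -> 0.25
n_30_m30 = discordance(60)  # N(+30,-30) -> 0.75

print(f"{n_30_0:.0%} + {n_0_m30:.0%} = {n_30_0 + n_0_m30:.0%} (LR bound)")
print(f"QM prediction: {n_30_m30:.0%}")
assert n_30_m30 > n_30_0 + n_0_m30   # 75% > 50%: the inequality is violated
```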


How are you going to explain this with the Ensemble Interpretation, if we run 1000 separate single-pair experiments at (+30°, -30°) and still verify the QM prediction of 75% discordance?

THERE IS NO ENSEMBLE!?
 
  • #1,383
DevilsAvocado said:
Wow!:eek:! Wheeler and Feynman did struggle with this!? (now I have to buy two bottles :biggrin:) Pardon a layman, but what is "direct action"? Is it a part of the time-symmetric Wheeler–Feynman absorber theory (http://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorber_theory)?

"Direct action" means all sources are connected to sinks so there are no "free field" contributions to the Lagrangian. That doesn't mean Wheeler and Feynman thought there were no photons -- I'm not really sure what their motives were for using this approach.

In Relational Blockworld, we have a mathematical rule for the divergence-free graphical co-construction of sources, space and time, that's why we don't have free fields in our action.

DevilsAvocado said:
Hehe, kinda understand the speaker :smile: ... "we don't have photons" ... huh?:rolleyes:?

Exactly. It wasn't my idea, but I'm just as crazy for using it as those who proposed it (Bohr, Ulfbeck, Mottelson, Zeilinger) :smile:

In its defense, it's a very powerful means of dismissing lots of conceptual issues in quantum physics (QM and QFT), but it does entail corrections to GR -- you can imagine that "direct connections" are fine in flat spacetime, but in curved spacetime between sources at distances where curvature is significant, this idea won't marry up with GR.

DevilsAvocado said:
This is very interesting. I can see that you have a lot of work to do. Modifying GR is probably not an easy task. Is this the two-body problem (http://en.wikipedia.org/wiki/Two-body_problem) you are working on? (Edit: Below is of course the 2-body orbital problem, sorry... :redface:)

[Image: http://upload.wikimedia.org/wikipedia/commons/0/0e/Orbit5.gif]

Yes, that's the orbital problem we have to solve. I can't begin to tell you how much more complicated the math for Regge calculus is than simply solving Newtonian gravity or even GR numerically. So, we're just trying to do the case where the two bodies free fall directly towards one another first.
 
  • #1,384
Can one embed in spacetime a geometry which manifests the quantum mechanical observations of all Bell-type experiments therein?
 
  • #1,385
DevilsAvocado said:
Now, my point is that if we run a "collective" EPR-Bell experiment in the same way as the "1000 PF users", we should of course get the same QM statistics as in one single EPR-Bell experiment. Do you agree?
Strange, but I believe I have stated this clearly enough in my previous posts.
No, I disagree!

This discussion is not going anywhere if I have to state in every reply to you that I disagree about the outcome of a "collective" EPR-Bell experiment consisting of individual experiments with single pairs.

This is similar to a "collective" double-slit experiment consisting of individual experiments with single particles. Even from the orthodox QM perspective this question is quite dubious, because you need a coherent source of photons to observe interference. But there is no coherence for a single photon.
 
  • #1,386
Loren Booda said:
Can one embed in spacetime a geometry which manifests the quantum mechanical observations of all Bell-type experiments therein?

Our contention with Relational Blockworld is that a causally local but nonseparable reality solves all the QM "weirdness."

[See “Reconciling Spacetime and the Quantum: Relational Blockworld and the Quantum Liar Paradox,” W.M. Stuckey, Michael Silberstein & Michael Cifone, Foundations of Physics 38, No. 4, 348–383 (2008), quant-ph/0510090 (revised December 2007).

“Why Quantum Mechanics Favors Adynamical and Acausal Interpretations such as Relational Blockworld over Backwardly Causal and Time-Symmetric Rivals,” Michael Silberstein, Michael Cifone & W.M. Stuckey, Studies in History & Philosophy of Modern Physics 39, No. 4, 736–751 (2008).]

However, this interpretation implies a fundamental theory whereby the current "spacetime + matter" is to be replaced by "spacetimematter," e.g., one consequence of this view is that GR vacuum solutions are only approximations. So, it's incumbent upon us to produce this "theory X" (as Wallace calls it) and that's what we're working on now. To see our current attempt at how this might work, see Figures 1-4 of arXiv 0908.4348.
 
  • #1,387
zonde said:
Strange, but I believe I have stated this clearly enough in my previous posts.
No, I disagree!

Sorry, my fault. I will not ask about this again. I get your point now.

zonde said:
This discussion is not going anywhere if I have to state in every reply to you that I disagree about the outcome of a "collective" EPR-Bell experiment consisting of individual experiments with single pairs.

Yes, we disagree on this, and this is the whole point. However, I'm sure I can prove to you that your assumption is wrong, and thereby also show that the Ensemble Interpretation is wrong (unless you have missed something in your explanation).

Let me ask you: What is the time-limit (between the pairs) for you to consider a stream of entangled photons an "Ensemble"? Is it 1 nanosecond, 1 microsecond, 1 millisecond, 1 second, or what??

There must clearly be some limit (according to you), since you have stated that coherence is lost for a "single pair", and then the EPR-Bell experiment will fail.

So, what's the difference in time between two "single pairs" and two "coherent pairs" in an "Ensemble"??

zonde said:
This is similar to a "collective" double-slit experiment consisting of individual experiments with single particles. Even from the orthodox QM perspective this question is quite dubious, because you need a coherent source of photons to observe interference. But there is no coherence for a single photon.

And here is where you got it all wrong. Your view is the old classical view of interference, where the effect originates from several photons interfering with each other. But this is proven wrong. The interference originates from the one wavefunction of one photon interfering with itself! As the Nobel Laureate Paul Dirac (http://en.wikipedia.org/wiki/Paul_Dirac) states:
http://en.wikipedia.org/wiki/Photon_dynamics_in_the_double-slit_experiment#Probability_for_a_single_photon
...
Some time before the discovery of quantum mechanics people realized that the connexion between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place. The importance of the distinction can be made clear in the following way. Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the beam is connected with the probable number of photons in it, we should have half the total number going into each component. If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another and other times they would have to produce four photons. This would contradict the conservation of energy. The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs.

— Paul Dirac, The Principles of Quantum Mechanics, Fourth Edition, Chapter 1

I guess your last hope is to say that Paul Dirac was wrong and that you are right, but then you run into the next problem – physical proof. This video by Akira Tonomura at Hitachi Ltd shows a double-slit experiment in which individual electrons build up an interference pattern:
https://www.youtube.com/watch?v=FCoiyhC30bc

As you can see, you are obviously wrong, and we could of course extend the time between every electron to 1 second, or 1 minute, or 1 hour, or 1 day, or 1 month, and still get exactly the same result as above!

This double-slit experiment could of course also be distributed across different geographic locations, and when we later assemble the individual results, we would of course get the same collective picture as above. It's exactly the same mechanism as throwing dice – probability.

Even if you disagree, you can’t deny physical proofs can you...
 
  • #1,388
RUTA said:
"Direct action" means all sources are connected to sinks so there are no "free field" contributions to the Lagrangian. That doesn't mean Wheeler and Feynman thought there were no photons -- I'm not really sure what their motives were for using this approach.

In Relational Blockworld, we have a mathematical rule for the divergence-free graphical co-construction of sources, space and time, that's why we don't have free fields in our action.

This is really hard for me... I can only guess my way through "the haze of complexity"... I guess what you are saying is that if we have no photons, then naturally the force carrier of electromagnetism, one of the four fundamental interactions, also has to go, and it must in some (very strange to me) way be replaced by "direct action", right?? (And that also goes for the other 3 fundamental interactions? :rolleyes:)

Very weird indeed, every part of physics should be affected by this... (if I'm correct)

RUTA said:
Exactly. It wasn't my idea, but I'm just as crazy for using it as those who proposed it (Bohr, Ulfbeck, Mottelson, Zeilinger) :smile:

In its defense, it's a very powerful means of dismissing lots of conceptual issues in quantum physics (QM and QFT), but it does entail corrections to GR -- you can imagine that "direct connections" are fine in flat spacetime, but in curved spacetime between sources at distances where curvature is significant, this idea won't marry up with GR.

Ohh yeah, "crazy" is the term... :smile:

RUTA said:
Yes, that's the orbital problem we have to solve. I can't begin to tell you how much more complicated the math for Regge calculus is than simply solving Newtonian gravity or even GR numerically. So, we're just trying to do the case where the two bodies free fall directly towards one another first.

I have serious trouble just understanding the metric on Minkowski space... so this is actually very easy for me to relate to... :redface:

Seriously, can one look at RBW as a "digitization", or maybe "sampling", or just "quantization" of everything in nature (I'm thinking of the "blocks")? Just as digital music on a CD, or sound in a sampler, consists of small blocks of an originally analog, continuous signal – or is this totally wrong (and silly)?:rolleyes:?

If I'm correct, will this (hopefully) be the "key" to the quantization of gravity (which is maybe the most "analog" and "continuous", in distance, that we have)?

Just some personal thoughts... or whatever... :rolleyes:
 
  • #1,389
Loren Booda said:
Can Planck black holes be shown to violate the Bell inequality?

I don't know anything about micro black holes, but you must have quantum entanglement in some way to violate a Bell inequality.


...thinking more about it... this could maybe be a really cool way of "stealing" information from a black hole by sending in one half of an entangled pair...?? :cool:
 
  • #1,390
DevilsAvocado said:
I don't know anything about micro black holes, but you must have quantum entanglement in some way to violate a Bell inequality.


...thinking more about it... this could maybe be a really cool way of "stealing" information from a black hole by sending in one half of an entangled pair...?? :cool:

I don't know what it is you'd entangle, and anyway, the information can't leave the event horizon. Even if they are String Theory Fuzzballs, the event horizon is still "no return"... the ultimate in decoherence.
 
  • #1,391
DevilsAvocado said:
Yes, we disagree on this, and this is the whole point. However, I'm sure I can prove to you that your assumption is wrong, and thereby also show that the Ensemble Interpretation is wrong (unless you have missed something in your explanation).
How do you intend to do that without actual experimental results?

DevilsAvocado said:
Let me ask you: What is the time-limit (between the pairs) for you to consider a stream of entangled photons an "Ensemble"? Is it 1 nanosecond, 1 microsecond, 1 millisecond, 1 second, or what??

There must clearly be some limit (according to you), since you have stated that coherence is lost for a "single pair", and then the EPR-Bell experiment will fail.

So, what's the difference in time between two "single pairs" and two "coherent pairs" in an "Ensemble"??
Your question is quite reasonable, but I have to admit that I don't have clear answers.
I tried to look at this using different experiments that involve observation of the Hong–Ou–Mandel (HOM) dip, and I would say that "coherent pairs" should be those that use this "memory" in the same way, i.e. their results are correlated, not just random. But the allowed time offset before coherence is lost is very small – on the scale of picoseconds.
However, the time limit for preservation of the "memory" content should be much longer.

DevilsAvocado said:
And here is where you got it all wrong. Your view is the old classical view on interference, where the effect originates from several photons interfering with each other. But this is proven wrong. The interference originates from one wavefunction of one photon interfering with itself! As the Nobel Laureate http://en.wikipedia.org/wiki/Paul_Dirac" states:

But look at your quote. Dirac says: "The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs."
But he doesn't say what the physical result of constructive and destructive interference is.
What happens when a photon interferes with itself destructively? Does it disappear, or jump to another place, or what?
What happens when a photon interferes with itself constructively? Does it just stay the way it is, or what?
He is simply stepping away from the physical context of the question to avoid having to give an answer in a physical sense.

This matter is not so simple. Take a look, for example, at this Feynman quote (there was a discussion about it at https://www.physicsforums.com/showthread.php?t=406161):
"It is to be emphasized that no matter how many amplitudes we draw, add, or multiply, our objective is to calculate a single final amplitude for the event. Mistakes are often made by physics students at first because they do not keep this important point in mind. They work for so long analyzing events involving a single photon that they begin to think that the wavefunction or amplitude is somehow associated with the photon. But these amplitudes are probability amplitudes, that give, when squared, the probability of a complete event. Keeping this principle in mind should help the student avoid being confused by things such as the "collapse of the wavefunction" and similar magic."

Why can interpretations of QM not make do with a single bare photon in a single world?
Why do we need superposition, or a pilot wave, or many worlds?
If I had to give one single phrase for what is common to all these interpretations, I would say it's the context of measurement.
 
  • #1,392
zonde said:
How do you intend to do that without actual experimental results?


Your question is quite reasonable, but I have to admit that I don't have clear answers.
I tried to look at this using different experiments that involve observation of the Hong–Ou–Mandel (HOM) dip, and I would say that "coherent pairs" should be those that use this "memory" in the same way, i.e. their results are correlated, not just random. But the allowed time offset before coherence is lost is very small – on the scale of picoseconds.
However, the time limit for preservation of the "memory" content should be much longer.



But look at your quote. Dirac says: "The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs."
But he doesn't say what the physical result of constructive and destructive interference is.
What happens when a photon interferes with itself destructively? Does it disappear, or jump to another place, or what?
What happens when a photon interferes with itself constructively? Does it just stay the way it is, or what?
He is simply stepping away from the physical context of the question to avoid having to give an answer in a physical sense.

This matter is not so simple. Take a look, for example, at this Feynman quote (there was a discussion about it at https://www.physicsforums.com/showthread.php?t=406161):
"It is to be emphasized that no matter how many amplitudes we draw, add, or multiply, our objective is to calculate a single final amplitude for the event. Mistakes are often made by physics students at first because they do not keep this important point in mind. They work for so long analyzing events involving a single photon that they begin to think that the wavefunction or amplitude is somehow associated with the photon. But these amplitudes are probability amplitudes, that give, when squared, the probability of a complete event. Keeping this principle in mind should help the student avoid being confused by things such as the "collapse of the wavefunction" and similar magic."

Why can interpretations of QM not make do with a single bare photon in a single world?
Why do we need superposition, or a pilot wave, or many worlds?
If I had to give one single phrase for what is common to all these interpretations, I would say it's the context of measurement.

Ignoring context and interpretations of QM, I don't see how you can salvage anything like local realism given the violations of BI's. As for how a photon interferes with itself destructively, I would guess a net loss of energy, but who knows. Does it matter? That doesn't really affect non-locality in the context of Bell. The bottom line is that the results of these experiments are incompatible with ANY LHV theory, and the only hidden-variable theory still on offer is deBB – which is explicitly nonlocal, and which personally I don't buy (although it's viable for now).
 
  • #1,393
DevilsAvocado said:
This is really hard for me... I can only guess my way through "the haze of complexity"... I guess what you are saying is that if we have no photons, then naturally the force carrier of electromagnetism, one of the four fundamental interactions, also has to go, and it must in some (very strange to me) way be replaced by "direct action", right?? (And that also goes for the other 3 fundamental interactions? :rolleyes:)

Yes, at the fundamental level there are no "forces." The notion of "force" has to do with the deviation of a worldline (matter) from geodesy in a background spacetime. In our approach, spacetime and matter are fused into spacetimematter, and the WHOLE thing is co-constructed. It's like GR, where you can view the ontology as free of gravitational force. The difference is that in GR you can have vacuum solutions, i.e., it's meaningful to talk about empty spacetime. In our approach, spatio-temporal distances are defined only between sources, so there is no vacuum solution. This solves problems with closed time-like curves in GR, btw.

DevilsAvocado said:
I have serious trouble just understanding the metric on Minkowski space... so this is actually very easy for me to relate to... :redface:

Seriously, can one look at RBW as a "digitization", or maybe "sampling", or just "quantization" of everything in nature (I'm thinking of the "blocks")? Just as digital music on a CD, or sound in a sampler, consists of small blocks of an originally analog, continuous signal – or is this totally wrong (and silly)?:rolleyes:?

If I'm correct, will this (hopefully) be the "key" to the quantization of gravity (which is maybe the most "analog" and "continuous", in distance, that we have)?

Quantization of "everything" is probably the best metaphor. We use our fundamental rule to yield a partition function over the spacetimematter graph (local and nonseparable). The probability for any particular quantum outcome (graphical relation evidenced by a single detector click) can be obtained per the partition function. Thus, sets of many relations center statistically around the most probable outcome and one obtains classical physics. So, we don't start with classical physics (local and separable) and "quantize it." We start with a quantum physics (local and nonseparable) and obtain classical physics in the statistical limit.
 
  • #1,394
zonde said:
How do you intend to do that without actual experimental results?

No problem! :wink:

zonde said:
Your question is quite reasonable, but I have to admit that I don't have clear answers.
I tried to look at this using different experiments that involve observation of the Hong–Ou–Mandel (HOM) dip, and I would say that "coherent pairs" should be those that use this "memory" in the same way, i.e. their results are correlated, not just random. But the allowed time offset before coherence is lost is very small – on the scale of picoseconds.
However, the time limit for preservation of the "memory" content should be much longer.

And I have to admit that your answer is somewhat unclear...

Immediately you run into several difficulties. To start with, spontaneous parametric down-conversion in BBO crystals is driven by random vacuum fluctuations, and it's not a very efficient process: only about one out of 10^6 pump photons converts into two entangled photons – one in a million.

Then you have coincidence counting, the time window, and the delays occurring in the electronics and optics of the experimental setup.

All this results in roughly 1.5 entangled pairs per millisecond:
"Violation of Bell's inequality under strict Einstein locality conditions" (http://arxiv.org/abs/quant-ph/9810080)
Weihs, Jennewein, Simon, Weinfurter, and Zeilinger

...
The total of the delays occurring in the electronics and optics of our random number generator, sampling circuit, amplifier, electro-optic modulator and avalanche photodiodes was measured to be 75 ns.
...
A typical observed value of the function S in such a measurement was S = 2.73±0.02 for 14700 coincidence events collected in 10 s. This corresponds to a violation of the CHSH inequality of 30 standard deviations assuming only statistical errors.

There goes your "correlated" picoseconds.
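A quick back-of-envelope check, using only the numbers quoted from the paper:

```python
coincidences = 14_700          # from the Weihs et al. quote above
duration_s = 10.0
pairs_per_ms = coincidences / duration_s / 1000
mean_gap_ps = 1e12 * duration_s / coincidences
print(f"{pairs_per_ms:.2f} pairs/ms, mean gap ~{mean_gap_ps:.2e} ps")
# ~1.47 pairs/ms, with ~7e8 picoseconds between pairs on average --
# about nine orders of magnitude longer than a picosecond-scale window.
```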

And I am extremely interested in your "memory". What is it? How does it work? To me it looks at least as spooky as non-locality... This physical "entity" must have enormous "computing power" – keeping track of every microscopic probability in the whole universe... and not only every present probability: "it" must remember everything it "did" in the past to produce the correctly correlated data... in real time, without delays... How on Earth is this ever possible??

Another interesting problem: If Alice & Bob are separated by 20 km, and a big stream of non-entangled photons, mixed with a few entangled photons, is running towards their random polarizers and measuring apparatus to be time-tagged – how can your "memory" know whether one specific photon is entangled or not? This is something that is established later, when the data from Alice & Bob are compared.

For your "memory" to know this at the exact measuring moment – "it" would need TRUE FTL communication!?:bugeye:!?

Let’s admit, this doesn’t work, does it?

zonde said:
But he doesn't say what the physical result of constructive and destructive interference is.
What happens when a photon interferes with itself destructively? Does it disappear, or jump to another place, or what?
What happens when a photon interferes with itself constructively? Does it just stay the way it is, or what?
He is simply stepping away from the physical context of the question to avoid having to give an answer in a physical sense.

This is so easy that even I can give you a correct answer: What happens is that the single wavefunction of one photon goes through both slits, just like a single water wave, and after the slits it starts creating an interference pattern, just like a single water wave. When the wavefunction reaches the detector there are destructive and constructive probability amplitudes for the photon to be detected. Naturally, more single photons will be detected in those areas where the probability amplitudes are constructive (= higher probability).

It’s very simple.

[Image: two-source interference pattern]
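And to put numbers on this picture: here is a minimal sketch, with assumed, purely illustrative values (500 nm light, 50 µm slit separation, screen at 1 m), that adds the two path amplitudes of one photon and squares the sum:

```python
import numpy as np

wavelength = 500e-9     # assumed: 500 nm light
k = 2 * np.pi / wavelength
d, L = 50e-6, 1.0       # assumed: 50 um slit separation, screen at 1 m

x = np.linspace(-0.05, 0.05, 9)        # positions on the screen (m)
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)  # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)  # path length from slit 2

# One photon, two paths: add the amplitudes, then square for probability.
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)
prob = np.abs(psi) ** 2 / 4            # normalized so the maximum is 1

for xi, p in zip(x, prob):
    print(f"x = {xi:+.4f} m: relative P = {p:.2f}")
# More photons land where the amplitudes add (P near 1), none where they
# cancel (P near 0) -- each photon interfering only with itself.
```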
 

  • #1,395
DevilsAvocado said:
No problem! :wink:



And I have to admit that your answer is somewhat unclear...

Immediately you run into several difficulties. To start with, spontaneous parametric down-conversion in BBO crystals is driven by random vacuum fluctuations, and it's not a very efficient process: only about one out of 10^6 pump photons converts into two entangled photons – one in a million.

Then you have coincidence counting, the time window, and the delays occurring in the electronics and optics of the experimental setup.

All this results in roughly 1.5 entangled pairs per millisecond:


There goes your "correlated" picoseconds.

And I am extremely interested in your "memory". What is it? How does it work? To me it looks at least as spooky as non-locality... This physical "entity" must have enormous "computing power" – keeping track of every microscopic probability in the whole universe... and not only every present probability: "it" must remember everything it "did" in the past to produce the correctly correlated data... in real time, without delays... How on Earth is this ever possible??

Another interesting problem: If Alice & Bob are separated by 20 km, and a big stream of non-entangled photons, mixed with a few entangled photons, is running towards their random polarizers and measuring apparatus to be time-tagged – how can your "memory" know whether one specific photon is entangled or not? This is something that is established later, when the data from Alice & Bob are compared.

For your "memory" to know this at the exact measuring moment – "it" would need TRUE FTL communication!?:bugeye:!?

Let's admit, this doesn't work, does it?



This is so easy that even I can give you a correct answer: What happens is that the single wavefunction of one photon goes through both slits, just like a single water wave, and after the slits it starts creating an interference pattern, just like a single water wave. When the wavefunction reaches the detector there are destructive and constructive probability amplitudes for the photon to be detected. Naturally, more single photons will be detected in those areas where the probability amplitudes are constructive (= higher probability).

It's very simple.

[Image: two-source interference pattern]

There is a problem. I think the problem is Zonde's approach and its endlessly reductionist requirements. Nearly 90 pages, and a couple of people (not naming names) seem to be chasing their tails. Your .gif picture is very eloquent – doesn't it say it all?! What more is needed, a frying pan inscribed with "No LHV!" to beat some about the head? This fair-sampling complaint, then offshoots (by another) into endless talk of Malus' Law, then back to demands for evidence that is self-evident, or on the other hand impossible (FTL verification without classical means). I'm ready to believe this is all going to end with Abbott & Costello asking "Who's on first?"
 
  • #1,396
Thanks for the informative replies RUTA. Some comments below:

1. I've begun a second, much slower, reading of your main RBW paper, arXiv 0908.4348. I have the feeling that it might turn out to be somewhat influential. Who knows. In any case, as I mentioned, insofar as I understand the rationale of your approach it does seem quite reasonable, even if certain (necessary?) formal aspects of it are somewhat alien to my, admittedly pedestrian, way of thinking. Hopefully, I'll be able to ask you some worthwhile questions about it in the future -- definitely in a different thread, and probably in a different subforum.

2. Wrt the OP of this thread, I was asking if you assume (not just wrt RBW, but in your general thinking as a theoretical physicist and natural philosopher) the observation-independent existence of an 'underlying reality'. I mean, do you think that this is a reasonable inference from observations, or isn't it?

Wrt the OP, "action at a distance ... as envisaged by the EPR Paradox" entails that there's no underlying reality.

Keeping in mind that there's a difference between saying that there's no way of talking, objectively, about an underlying reality (or any subset thereof such as, say, a light medium), and that an underlying reality doesn't exist, then what would you advise readers like me to believe -- that no underlying reality exists independent of observation, or that it's reasonable to assume that an underlying reality independent of observation exists but there's just no way to objectively talk about it?

If it's assumed that no observation-independent underlying reality exists, then the answer to the OP's question is that "action at a distance ... as envisaged by the EPR Paradox" isn't just possible, it's all there is. Alternatively, if it's assumed that an observation-independent underlying reality exists (even if we have no way of objectively talking about it), then the answer to the OP's question is no, "action at a distance ... as envisaged by the EPR Paradox" isn't possible.

In the course of answering the OP's question, it's been suggested that violations of BIs (and GHZ inconsistencies, etc.) inform us that either an observation-independent underlying reality doesn't exist, or, if it exists, then it's nonlocal in the EPR sense. But nonlocality, "as envisaged by the EPR Paradox", entails that an observation-independent reality doesn't exist. So, the suggestion becomes: violations of BIs show that there is no reality underlying instrumental behavior. This would seem to render any and all 'interpretations' of the qm formalism simply insipid exercises in the manipulation of agile terms.

Of course, there's another, more reasonable way to interpret BI violations -- that they're not telling us anything about the nature of reality, but rather that they have to do with how certain experimental situations might be formalised. In which case, the answer, in my view, to the OP's question is simply that there is, currently, no definitive answer to his question -- but that the most reasonable assumptions, based on what is known, entail that, no, it's not possible.
 
  • #1,397
Thanks for the thoughtful reply DA. I'm not sure I totally agree with (or maybe I don't fully understand) some of your points. Comments below:

DevilsAvocado said:
I agree; we all want the world to be logical and understandable. No one wants it to be horrible, incomprehensible or 'magical'. We want to know that it all works the way we 'perceive' it. We also want nature to be 'homogeneous' on all scales. It’s very logical and natural, and I agree.
Not strictly "'homogeneous' on all scales", keeping in mind that there do seem to be certain 'emergent' organizing principles that pertain to some physical 'regimes' and not others, but rather that there might be some fundamental, or maybe a single fundamental, dynamical principle(s) that pervade(s) all scales of behavior.

DevilsAvocado said:
But I think it could be a mistake... or at least lead to mistakes.
Sure, it could. But maybe not. Modern particle physics has proceeded according to a reductionist program -- in the sense of 'explaining' the macroscopic world in terms of properties and principles governing the microscopic and submicroscopic world. But there's another approach (also a sort of reductionism) that aims at abstracting dynamical principles that are relevant at all scales of behavior -- perhaps even reducing to one basic fundamental wave dynamic.

DevilsAvocado said:
A classical mistake is when one of the brightest minds in history, Albert Einstein, did not like what his own field equations for the theory of general relativity revealed – the universe cannot be static.

Albert Einstein was very dissatisfied, so he modified his original theory and included the cosmological constant (lambda: Λ) to make the universe static. Einstein abandoned the concept after the observation of the Hubble redshift, and called it the "biggest blunder" of his life.

(However, the discovery of cosmic acceleration in the 1990s has renewed interest in a cosmological constant, but today we all know that the universe is expanding, even if that was not Albert Einstein’s logical hypothesis.)
Einstein made a logical judgement, given what was known at the time, and then changed his mind given observational evidence of the expansion. It's quite possible that the mainstream paradigms of both fundamental physics and cosmology might change significantly in the next, say, 100 to 200 years.

DevilsAvocado said:
Another classical example is Isaac Newton, who found his own law of gravity and the notion of "action at a distance" deeply uncomfortable, so uncomfortable that he made a strong reservation in 1692.
Newton formulated some general mathematical relationships which accorded with observations. His reservation wrt his gravitational law was that he wasn't going to speculate about the underlying reason(s) for its apparent truth. Then, a couple of centuries after Newton, Einstein presented a more sophisticated (in terms of its predictive accuracy) and more explanatory (in terms of its geometric representation) model. And I think we can assume that GR is a mathematical/geometrical simplification of the fundamental physical reality determining gravitational behavior. Just as the Standard Model is a simplification, and qm is a simplification.

DevilsAvocado said:
We must learn from this.
I agree. And the main thing we learn from is observation. Relatively recent and fascinating cosmological observations have led to inferences regarding the nature of 'dark energy' and 'dark matter'. But, in keeping with the theme of your reply to my reply to nismaratwork, these apparent phenomena don't necessarily entail the existence of anything wholly unfamiliar to our sensory reality. Dark energy might be, fundamentally, the kinetic energy of the universal expansion. The apparent acceleration of the expansion might just be a blip in the overall trend. It might be taken as evidence that gravity isn't the dominant force in our universe. I'm not familiar with the current mainstream views on this.

ThomasT said:
Our universe appears to be evolving. Why not just assume that it 'is' evolving -- that 'change' or 'time' isn't just an illusion, but is real? Why not assume that the fundamental physical principles govern physical behavior at all scales?
If there's a fundamental physical principle, say in the form of a fundamental wave dynamic, and if it makes sense to assume that it's present at all scales, then, conceptualizing the boundary of our universe as an expanding (ideally) spherical shell -- the mother of all waveforms, so to speak -- the discovery of the cosmic-scale expansion becomes maybe the single most important scientific discovery in history.

And dark matter might be waves in a medium or media of unknown structure. Is there any particular reason to assume that wave behavior in media that we can't see is 'fundamentally' different from wave behavior in media that we can see? It might be argued that standard qm is based on the notion that the wave mechanics of unknown media is essentially the same as the wave mechanics of known media.

DevilsAvocado said:
I think that humans have a big "ontological weakness" – we think that the human mind is "default" and the "scientific center" of everything in the universe, and there are even some who are convinced that their own brain is the greatest of all. But there is no evidence at all that this is the case (please note: I'm not talking about "God").
I certainly agree that this seems to be the general orientation. Whereas, the more scientifically sophisticated worldview would seem to be that what our sensory faculties reveal to us is not the fundamental 'reality'. Perhaps we're just complex, bounded waveforms, persisting for a virtual instant as far as the life of the universe as a whole is concerned -- or however one might want to talk about it.

DevilsAvocado said:
One extremely simple example is "human colors". Do they exist? The answer is No. Colors only exist inside our heads. In the "real world" there is only electromagnetic radiation of different frequency and wavelength. A scientist trying to visualize "logical colors" in nature will not go far.
Well, colors do exist. But, as you've noted, it's really important to specify the context within which they can be said to exist. We humans, and moons and cars and computers, exist, but these forms that are a function of our sensory faculties aren't the fundamental form of reality.

And the way that all of our sensory faculties seem to function (vibrationally) gives us another clue (along with quantum phenomena, and the apparent behavior of dark matter, etc.) wrt the fundamental nature of reality. It's wavelike. Particles and particulate media emerge from complex wave interactions. Now, wrt my statement(s), is there any reason to suppose that wave behavior in particulate media is governed by different fundamental dynamical principles than wave behavior in nonparticulate media? Of course, I have no idea.

DevilsAvocado said:
Have you ever tried to visualize a four-dimensional space-time?
I don't want to. I think that it's a simplification of underlying complex wave behavior.

DevilsAvocado said:
Or visualize the bending and curving of that 4D space-time??
No. But consider the possibility that 'gravitational lensing' is further evidence in favor of a wave interpretation of fundamental reality. (And keep in mind that insofar as we entertain the idea of a fundamental reality that exists whether we happen to be probing it or not, then we can't logically entertain the possibility of EPR-envisaged spooky action at a distance per the OP.)

DevilsAvocado said:
To my understanding, not even the brightest minds can do this?? Yes, it works perfectly in the mathematical equations, but to imagine an "ontological description" that fits "our experience"... is this even possible??
Sure, there's wave activity in a medium or media that we can't detect that's affecting the light.

DevilsAvocado said:
Yet, we know it’s there, and we can take pictures of it in the form of gravitational lensing on the large cosmological scale:
Does this fit your picture of a "logical reality"...?
Yes.

DevilsAvocado said:
I don’t think mainstream science claims the full understanding of EPR-Bell experiments, it’s still a paradox. What is a fact though is that either locality and/or realism have to go if QM is correct (and QM is the most precise theory we got so far): Bell's Theorem proves that QM violates Local Realism.
I agree that objective realism is a pipe dream. There's simply no way to know, definitively, what the underlying reality is or, definitively, how it behaves. It is, nonetheless, a wonderful speculative enterprise. And I do think that informed speculations about the nature of reality will help fundamental physics advance.

But if we opt for nonlocality, per EPR and the OP, then there is no underlying reality -- and I find that a very limiting and boring option.

DevilsAvocado said:
There seem to be some in this thread who really think that Einstein would have stuck to his original interpretation of the EPR paradox, despite the work of John Bell and the many experimentalists who keep verifying QM predictions and Bell's Theorem time after time. I'm pretty sure that would not have been the case. Just look at the cosmological constant and the Hubble redshift. Einstein changed his mind immediately. He did not start looking for "loopholes" in Hubble's telescope or any other far-fetched 'escape' – he was a diehard empiricist.
Bell experiments are one thing. Interpretations of Bell experiments are quite another. Do they inform us about the nature of reality? How, especially when one interpretation is that BI violations tell us that an underlying reality doesn't even exist? And if that's the case, then what is there to 'discover'?

The acceptance that the cosmological expansion is real is much less problematic than the acceptance that there's no reality underlying instrumental behavior.

I don't know what the mainstream view is, but if it's that qm and experiments are incompatible with the predictions of LR models of a certain form specified by Bell, then I currently agree with that. And the experiments tell us nothing about any necessary qualitative features of the reality underlying the instrumental behavior – except maybe that the correlation between detector behavior and emitter behavior would seem to support the assumption that there's something moving from emitter to detector. That, in turn, supports the assumption that there's an underlying real 'whatever', produced by the emission process, which exists prior to and independent of filtration and detection – which would support the contention that the correct answer to the OP's question is, no, "action at a distance ... as envisaged by the EPR Paradox" is not possible.

DevilsAvocado said:
We already know that there are problems in getting full compatibility between QM and GR when it comes to gravity in extreme situations, and EPR-Bell is just another verification of this incompatibility. If we try to solve the EPR-Bell situation as a "spookyactionatadistanceist" we get problems with SR and Relativity of Simultaneity (RoS) + problems with QM and the No-communication theorem. If we try to solve it as a "surrealist" (non-separable/non-realism) we get the problems RUTA is struggling with.

So this question is definitely NOT solved, and it’s definitely NOT easy.

But, let’s not make it too easy by saying the problem doesn’t exist at all, because there’s still QM-incompatible gravity dragging us down, and it will never go away...
I agree. And the QM-GR, RBW-GR formal problems are beyond my comprehension. However, this thread is (ok, it sort of was at one time) about answering the question, "Is action at a distance possible as envisaged by EPR?". And here's my not quite definitive answer to that:

If there's no underlying reality, then it's possible.
Experiments suggest that there's an underlying reality.
Therefore, it's not possible.

Or, in the words of Captain Beefheart:

The stars are matter,
We are matter,
But it doesn't matter.
 
  • #1,398
DevilsAvocado said:
This is so easy that even I can give you a correct answer: What happens is that the single wavefunction of one photon goes through both slits, just like a single water wave, and after the slits it starts creating an interference pattern, just like a single water wave.

***
When the wavefunction reaches the detector there are destructive and constructive probability amplitudes for the photon to be detected.
***

Naturally, more single photons will be detected in those areas where the probability amplitudes are constructive (= higher probability).

It’s very simple.
Incredible!
You have got it!

So if we hypothetically detect all photons, even those with miserable detection probability, we lose any trace of the interference pattern.
That's what I call unfair sampling, but you can call it whatever you want.

Forget about the Ensemble Interpretation. I can explain it to you using orthodox QM in a much simpler way.

Just to check that we are on the same line: consider a Mach–Zehnder interferometer (http://en.wikipedia.org/wiki/Mach-Zehnder_interferometer).

[Image: Mach–Zehnder interferometer]


After we count all the phase shifts from the different mirrors, Wikipedia says: "there is no phase difference in the two beams in detector 1, yielding constructive interference." So detector 1 fires when a photon arrives there (constructive interference), but detector 2 does not fire when a photon arrives there (destructive interference).
So photons arrive at both detectors, but because of interference one detector fires and the other doesn't.

Are we still on the same line here?
 
  • #1,399
zonde said:
Consider a Mach–Zehnder interferometer (http://en.wikipedia.org/wiki/Mach-Zehnder_interferometer).

[Image: Mach–Zehnder interferometer]


After we count all the phase shifts from the different mirrors, Wikipedia says: "there is no phase difference in the two beams in detector 1, yielding constructive interference." So detector 1 fires when a photon arrives there (constructive interference), but detector 2 does not fire when a photon arrives there (destructive interference).

So photons arrive at both detectors, but because of interference one detector fires and the other doesn't.

Most people would say that no photons arrive at detector 2.
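A minimal sketch of the balanced interferometer, assuming the standard symmetric 50/50 beam-splitter matrix (which detector ends up bright or dark depends on this convention):

```python
import numpy as np

# Symmetric 50/50 beam splitter (one common convention; assumed here).
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

psi_in = np.array([1, 0])        # one photon entering input port 1
psi_out = BS @ BS @ psi_in       # balanced MZI: two beam splitters,
                                 # equal arms, no extra phase shift

print(np.abs(psi_out) ** 2)      # -> [0. 1.]
# All probability exits one port: the amplitude at the other detector is
# exactly zero, so there is no photon there to "fail to fire" it.
```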
 
  • #1,400
RUTA said:
Most people would say that no photons arrive at detector 2.

After finally reading this entire thread, I feel confident saying that Zonde is not most people... :rolleyes:
 
  • #1,401
zonde said:
So if we hypothetically detect all photons, even those with miserable detection probability, we lose any trace of the interference pattern.
That's what I call unfair sampling, but you can call it whatever you want.
...
Are we still on the same line here?

I'm afraid we don't even agree on "what's a line"... as I said, it's extremely simple. If we block one of the slits, we will not get the interference pattern, with or without "unfair sampling". This is an undeniable fact that should make sense even to a 10-year-old.

The physical proof is right in front of your nose; step by step you can see with your own eyes what happens as the "sampling" increases:

https://www.youtube.com/watch?v=FCoiyhC30bc
 
  • #1,402
RUTA said:
Most people would say that no photons arrive at detector 2.
What would most people say about this Feynman quote?
"It is to be emphasized that no matter how many amplitudes we draw, add, or multiply, our objective is to calculate a single final amplitude for the event. Mistakes are often made by physics students at first because they do not keep this important point in mind. They work for so long analyzing events involving a single photon that they begin to think that the wavefunction or amplitude is somehow associated with the photon. But these amplitudes are probability amplitudes, that give, when squared, the probability of a complete event. Keeping this principle in mind should help the student avoid being confused by things such as the "collapse of the wavefunction" and similar magic."

Most people will put on a smart face but think to themselves: "What the heck is he talking about? Why should I avoid magic? I love magic! It gives color to the world. It makes me feel special, after all. If it's not magic, I don't want to understand it at all."

Well, first of all, to solve a problem it has to be recognized as a problem. If there is no problem, there is nothing to solve. :biggrin:
 
  • #1,403
RUTA said:
Yes, at the fundamental level there are no "forces." The notion of "force" has to do with the deviation of a worldline (matter) from geodesic motion in a background spacetime. In our approach, spacetime and matter are fused into spacetimematter, and the WHOLE thing is co-constructed.

This is interesting. Do you need to "redefine" any existing physical laws to make it all work...?

RUTA said:
Quantization of "everything" is probably the best metaphor.
... We start with quantum physics (local and nonseparable) and obtain classical physics in the statistical limit.

This is very cool. You start with quantized blocks of everything, and then plug it into the "classical player", and out comes wonderful "analog" classical "music"! :cool:

If this works, I know one guy this would suit perfectly (http://en.wikipedia.org/wiki/Nick_Bostrom) – it’s all small "digital" blocks of information! :smile:
 
  • #1,404
gee, is this thread never going to end?

The answer is that spacetime is a patchwork of manifolds mapping the underlying quantum Hilbert space via a holographic mechanism.

http://arxiv.org/abs/0907.2939

Lubos Motl explains it all here

Admittedly, the details of the mechanism that gives rise to these emergent spacetime "patches" haven't been fully worked out, but it's only a matter of time...

So you can all stop arguing now. :wink: :biggrin:
 
  • #1,405
ThomasT said:
However, this thread is (ok, it sort of was at one time) about answering the question, "Is action at a distance possible as envisaged by EPR?". And here's my not quite definitive answer to that:

If there's no underlying reality, then it's possible.
Experiments suggest that there's an underlying reality.
Therefore, it's not possible.

Or, in the words of Captain Beefheart:

The stars are matter,
We are matter,
But it doesn't matter.

Not that it matters that much :smile:, but we could make it simple and say...

If QM is correct, then we are left with these options:
  • locality + non-realism

  • non-locality + realism

  • non-locality + non-realism

"When it is obvious that the goals cannot be reached, don't adjust the goals, adjust the action steps." -- Confucius :wink:
 
  • #1,406
unusualname said:
gee, is this thread never going to end?

NO! WHY??

(:smile: :smile: :smile:)
 
  • #1,407
nismaratwork said:
Postoji problem. Mislim da je problem [Croatian: "There is a problem. I think that the problem is"]

Nastrovje! :smile:

nismaratwork said:
What more is needed, a frying pan engraved with "No LHV!" to beat some about the head?

This is probably the most accurate solution this far! :biggrin:

nismaratwork said:
I'm ready to believe this is all going to end with Abbot & Costello asking "Who's on first?"

HAHA! LOL! Let’s try it and see what happens! :smile:

https://www.youtube.com/watch?v=wfmvkO5x6Ng
 
  • #1,408
DevilsAvocado said:
This is interesting. Do you need to "redefine" any existing physical laws to make it all work...?

The only change to existing physics would be to understand GR as a separable approximation to spacetimematter.

DevilsAvocado said:
This is very cool. You start with quantized blocks of everything, and then plug it into the "classical player", and out comes wonderful "analog" classical "music"! :cool:

If this works, I know one guy this would suit perfectly (http://en.wikipedia.org/wiki/Nick_Bostrom) – it’s all small "digital" blocks of information! :smile:

I don't know if our idea is in concert with his. Ok, that was bad :smile:
 
  • #1,409
RUTA said:
I don't know if our idea is in concert with his. Ok, that was bad :smile:

Not bad, not bad at all, if you do not know what you are talking about (= me :smile:), the best you can do is try "Acting" in concert ... but most of the time the damned violin is broken (= terrible noise + bleeding fingers :biggrin:) ...
 
  • #1,410
zonde said:
Incredible!
You have got it!

So if we hypothetically detect all photons, even those with miserably low detection probability, we lose any trace of the interference pattern.
That's what I call unfair sampling, but you can call it whatever you want.

Forget about the ensemble interpretation. I can explain it to you using orthodox QM in a much simpler way.

Just to check that we are on the same line. Consider the Mach-Zehnder interferometer: http://en.wikipedia.org/wiki/Mach-Zehnder_interferometer

[Image: Mach-Zehnder interferometer diagram]

After we count all the phase shifts from the different mirrors, Wikipedia says that "there is no phase difference in the two beams in detector 1, yielding constructive interference." So detector 1 fires when a photon arrives there (constructive interference), but detector 2 does not fire when a photon arrives there (destructive interference).
So photons arrive at both detectors, but because of interference one detector fires and the other doesn't.

Are we still on the same line here?

NO: where the interference is destructive, no photon arrives in the sense of detection, which is the only meaning of value here. On the other side, the photon can be detected (and will be, if the detection is perfect). The problem is that such counting (in a more general context) may hide something other than what one wants to illustrate. I propose modifying the MZ by adding suitably adapted diffusers on both branches. Then the overlap of the two sets of classical paths has positive measure, as in a two-slit experiment, and one can use a screen or an array of detectors.

Many misinterpretations of experiments with delay (delayed measurement and/or delayed erasure) originate in the failure to define PRECISELY what one means by interference, and in using counters and techniques from fourth-order interference in a context where second-order interference is what is at issue, creating misinterpretations that lead to confusion and to otherwise respectable people speaking of actions backward in time.

Black magic has invaded physics, and in recent posts I have again seen people imagining that Aspect's and other similar experiments establish non-locality.
Again, the hypothesis needed for Bell inequalities to mean anything of interest in physics is that one has Realism+Locality. For the experiments to have anything to do with Bell theory, one must assume Realism+Locality+Fair Sampling (say R+L+FS).
Now the incompatibility of the experimentally verified quantum correlation (a twisted form of Malus' law) with the mathematically trivial Bell inequalities "proves", in the sense of physics, that R+L+FS is false, meaning that at least one of R, L, and FS is false.
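For the record, the arithmetic behind that incompatibility is short. A sketch with the standard textbook settings (nothing specific to any particular experiment): QM predicts the correlation E(a,b) = cos 2(a-b) for polarization-entangled photon pairs (the twisted Malus law just mentioned), while any R+L+FS model must obey the CHSH form of Bell's inequality, |S| <= 2:

Code:
import numpy as np

# QM correlation of +/-1 polarization outcomes for an entangled pair,
# analyzer angles a and b in radians.
def E(a, b):
    return np.cos(2.0 * (a - b))

# Standard CHSH settings that maximize the quantum value.
a1, a2 = 0.0, np.pi / 4
b1, b2 = np.pi / 8, 3 * np.pi / 8

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)   # 2*sqrt(2) ~ 2.83, above the R+L+FS bound of 2

So the experiments, taken at face value, rule out the conjunction R+L+FS; which conjunct to drop is the whole question.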

Since QM tries to tell us in many ways that realism in the form needed by Bell's theorem (or in this discussion in particular) is false (something shared, starting in the 1930s, by Copenhagen and by Einstein and Schrödinger, for both of whom, at least Einstein for sure, realism could only come with new, yet unknown variables that, contrary to what Bell used, would not permit giving simultaneous values to mutually incompatible observables as used so far), there is no reason to consider a hypothesis as black-magical as non-locality: a hypothesis that furthermore cannot be verified, and that violates the spirit of special relativity, which cured the well-known stupidity of instant action (more violent than action at a distance, and something that troubled Newton and many others, but fortunately not enough of them, or the appropriate corrections would have been well known and testing General Relativity would have been harder).

Otherwise stated, if realism is false (as von Neumann thought he had proven in a weak form, but with a false proof, while he had a better proof, the one he liked, which appeared much later in Wigner's paper on Bell's inequality; but physicists do not need proofs, and should not, as physics is not the realm of proofs, which is math, including logic, and the truth or not of von Neumann's argument did not trouble people like Bohr, Dirac, Einstein, etc., and even Schrödinger, who did read von Neumann's book, as did Dirac I presume (someone knows for sure: reference please)), there is no place for non-locality, by Occam's razor at least.

Another thread, recently launched by DrC at my request (as I do not know how to do it, and he is well known in these columns), treats the issue of whether Bell's theorem can be established without locality. I have posted there some pre/re-prints of mine. But I am saddened to see so many physicists trapped in a maze of misinformation, led by reading them to believe that QM is non-local, and nature as well. Bell really succeeded in mixing people up here. He did support both realism and non-locality. For Wigner, on the other hand, Bell's theorem was the best proof known up to his time that HVs do NOT exist (and again, without HVs, or some other form of microscopic realism, why on Earth would one invoke something as baroque as non-locality?). There is enough difficulty in the laws of nature without invoking crazy hypotheses to give the sauce a better taste. I like my food extra hot, and find physics even hotter, even without assuming that the setting of one instrument changes the output of another instrument at the other end (so to speak) of the Universe. The real physics of that is already difficult enough to understand.
 
