Is this popular description of entanglement correct?

  • Thread starter: entropy1
  • Tags: entanglement
  • #121
Sunil said:
An excuse for not allowing the use of your independence assumption in Bell tests.
Either my explanation is true or it is not. I think the word "excuse" here is used to avoid accepting that my explanation is perfectly valid.

Sunil said:
But let's look at the next excuse which you have to present for the experiment where the directions of the detectors are defined by starlight arriving shortly before the measurement, coming from the side opposite to the particle measured at that detector. There was a real experiment with this. Instead of starlight, I would prefer CMBR radiation coming from this other side. So, the event which created these photons has not been in the past light cone of the preparation of the pair.
1. The model I proposed is in terms of classical EM. The Big-Bang, the inflation period and all that cannot be described in terms of this model. So, let's stay in a regime where this model makes sense.

2. I agree that if you can prove that "the event which has created these photons has not been in the past light cone of the preparation of the pair" SD is dead. The question is, can you?

Sunil said:
BTW, if there is a singularity in the past - and according to GR without inflation, as well as to GR with inflation caused by a change of the vacuum state, there has to be a singularity - then there is a well-defined and finite horizon of events which have a common event in the past with us. This horizon can be easily computed in GR, and in the BB without inflation it was quite small, so that the inhomogeneities visible in the CMBR were greater than this horizon size.
As far as I know there is no theory at this time that is capable of describing the Big-Bang. So all this is pure speculation.

Sunil said:
This problem was named the "horizon problem". Inflation solves it FAPP by making the horizon greater than what we see in the CMBR. But it does not change the fact that those events we see in the CMBR coming from opposite sides are causally influenced by causes farther away in those directions, and all we have to do is search far enough away for those causes that we end up with causes in opposite directions which have nothing in their common past. So, each of the two causes can influence (if Einstein causality holds) only one of the detectors, and not the preparation procedure.
Can you please specify the conditions at the Big-Bang? Was the Big-Bang a deterministic process or not? If it is described by GR, it should be, right? Did correlations exist in the pre-Big-Bang state? What evidence do we have for that?

Sunil said:
As before, I'm sure you will find an excuse.
I am not going to accept a bunch of assumptions with no evidence behind them. If you present a coherent theory of the Big-Bang I'll look into it and see if an "excuse" is to be found.

Sunil said:
This is what normal science, with the rejection of superdeterminism, is assuming.
I don't think so. "Normal science" uses a certain model. The conclusions only apply if the model is apt for the experiment under investigation. For example, the kinetic theory of gases applies to an ideal gas. It works when the system is well approximated by such a model. If your gas is far from ideal you don't stubbornly insist on this model, you change it. In the case of Bell's theorem the model is Newtonian mechanics with contact forces only. Such a model is inappropriate for describing EM phenomena, even classical EM phenomena like induction. So, it is no wonder that the model fails to reproduce QM.
Sunil said:
The statistics remain stable; namely, the interesting variables which do not have sufficiently simple causal explanations for their correlations will remain independent.
"Sufficiently simple" is a loaded term. And I didn't claim that the statistics do not remain stable in this case; I just don't know. If you are right and the function behaves well, great. We will be able to compute the classical EM prediction for a Bell test. When such a computation is done we will see if it turns out right or wrong.
Sunil said:
But for various pseudorandom number generators such independence proofs are known. I even remember having seen the proof for the sequence of digits of ##\pi##.
I don't get your point about ##\pi##. Clearly, the digits of ##\pi## are not independent, since they are determined by a quite simple algorithm. Two machines calculating ##\pi## would be perfectly correlated.

Sunil said:
Ah, I see, this is what you meant with "above". Nice trick, given that (I think) you know that it is quite difficult to prepare entangled states for macroscopic bodies in a stable way.
If Bell correlations are caused by long-range interactions between the experimental parts, one should be able to prepare macroscopic entangled states. I am not aware of any attempt to do so.
Sunil said:
So your claim that superdeterministic theories may be falsifiable is bogus. That you think about such a possibility does not make the theory superdeterministic.

Clearly, all theories with long-range interactions are falsifiable. Classical EM, GR, fluid mechanics have been tested a lot. I have laid out my argument why such theories could be superdeterministic. We will know that when the function is computed. Until then you cannot rule them out.

Sunil said:
You get what usual science assumes - independence if there is no causal justification for a dependence.
But there is a causal justification. There is a long-range interaction involved that determines the hidden variable. "Normal" science does not assume independence when this is the case.

Sunil said:
This independence assumption (the null hypothesis) is clearly empirically falsifiable, and if it is falsified, then usual science starts to look for causal explanations. And usually finds them. ("Usually" because this requires time, so one has to expect that there will always be cases where the search has not yet been successful.)
As far as I can tell, the independence assumption was falsified by Bell tests + the EPR argument. Locality can only be maintained if the independence assumption fails. And no violation of locality was ever witnessed in "normal science", right?

Sunil said:
This is as probable as that all the atoms of a gas concentrate themselves in one small part of the bottle.
And this happens spontaneously when the non-interaction assumption (approximately true for a gas far from its boiling point) fails as the gas is cooled. Exactly my point.

Sunil said:
The cooperation for planetary systems is already predicted by very rough approximations...
There was a time when no suitable model existed and no such approximations were possible. We could very well be at this stage with entanglement.

Sunil said:
False logic. If I don't know the explanation, there may be one. But it is just as possible that there is none, and thus a violation of the common cause principle. Your "so there is no" obviously does not follow.
There is nothing wrong with my logic. If you can't prove a violation (and you can't) you cannot just assume one.

Sunil said:
We have the large experience of humankind with the successful application of the common cause principle. Essentially everybody identifies correlations in everyday life and then tries to find explanations. This fails if the correlations are not real but statistical errors. But many times causal explanations will be found. If it were violated in reality, it would have been detected long ago.
I agree. This is why I find SD a natural choice. It explains the correlations in terms of past causes. The other possible explanation is non-locality, a behavior which was never witnessed.

I think you forget the really important point that without SD you have non-locality. Your arguments based on what "normal science" assumes or not do not work for this scenario, since, when you factor in the strong evidence for locality, the prior probability for a violation of statistical independence is increased by many orders of magnitude.
 
  • #122
DrChinese said:
Note that for 't Hooft's reference, he is quoting... himself! (Just as you seem to do.) And he claims this to be a derivation of QM.
Yes, he is quoting himself because he invented the model. What's wrong with that?

DrChinese said:
And from his reference, which is about "fast" variables (which I am not aware of as part of any standard model), he tells us:

"...we first assume the existence of very high frequency oscillations. These give rise to energy levels way beyond the regime of the Standard Model."

How about we assume the existence of very small turtles? "It's turtles all the way down..." Or how about we assume the universe was created last Thursday, and our memories of an earlier existence are false (Last Thursdayism).
't Hooft's model is an existence proof that local, deterministic theories could reproduce QM. I did not claim that his model is a true replacement for the Standard Model.

DrChinese said:
Basically: you can't reference another author referencing himself with a completely speculative set of hypotheses/assumptions that dismiss Bell, and then say "look what I proved".
Why not? When I see your rebuttal published, I'll change my mind.

DrChinese said:
2. I accept that any model, deterministic or not, has a restricted number of initial states. What is missing (among other things) is i) a causal connection between a) the entangled particle source(s) and b) the many apparati determining the measurement context;
Not "any" model. In Newtonian mechanics with contact forces you can arrange the initial state in any way you want. There is no rule that says that, given the position and velocity of particle 1, you need to restrict the position and/or velocity of particle 2 in any way. Not so in field theories. In classical EM you can, as in the Newtonian case, arrange the initial positions/velocities in any way you want, but you can't do that for the fields. The fields at particle 1 are uniquely determined by the global distribution and momenta of the charges, and those fields determine how particle 1 moves (via the Lorentz force). Since Bell's theorem disregards this constraint (in the form of the independence assumption), one cannot rely on the validity of the so-called "classical" prediction in this case.
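To make this constraint concrete: in classical EM, Gauss's law ties the flux of the electric field through any closed surface to the enclosed charge, so the initial field data cannot be chosen independently of the global charge distribution. Here is a minimal numerical sketch of that fact (an illustration only, assuming NumPy is available; the charge values and positions are arbitrary):

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (SI)

def e_field(r, charges):
    """Coulomb field at point r due to a list of (charge, position) pairs."""
    E = np.zeros(3)
    for q, pos in charges:
        d = r - pos
        E += q * d / (4 * np.pi * EPS0 * np.linalg.norm(d) ** 3)
    return E

# One (hypothetical) charge inside the unit sphere, one far outside it.
charges = [(1e-9, np.array([0.2, 0.0, 0.0])),
           (-3e-9, np.array([5.0, 0.0, 0.0]))]

# Monte Carlo estimate of the flux of E through the unit sphere:
# flux ~= area * <E . n> = 4*pi * <E(p) . p> for p uniform on the sphere.
rng = np.random.default_rng(0)
pts = rng.normal(size=(20_000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
flux = 4 * np.pi * np.mean([e_field(p, charges) @ p for p in pts])

print(flux)          # approximately 112.9 V*m, set by the enclosed charge only
print(1e-9 / EPS0)   # Gauss's law prediction, the same number
```

The outside charge contributes no net flux; only the enclosed charge shows up, no matter how the charges are arranged.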

DrChinese said:
and ii) a viable description of the mechanism of how that causal connection between a) and b) coordinates to obey the quantum mechanical expectation values locally.
The only mechanism is the restricted number of initial states. One simply needs to calculate the prediction of the theory while taking that restriction into account.

DrChinese said:
An important note about the sources of entangled particles per a) above. The entangled particles can come from fully independent laser sources, without having ever been present in the same light cone. Now, I am perfectly aware that within a purported Superdeterministic model, everything in the observable universe lies in a common light cone and therefore would be "eligible" to participate in the anti-Bell conspiracy. But now you would be arguing:

There exist particular photons, from 2 different lasers*, that also end up being measured at the proper angles (context) for a Bell Inequality violation, but only when statistically averaged (even though there is complete predetermination of each individual case); and "somehow" these two photons each carry a marker of some kind (remember, they have never been in causal contact, so the laser source must have passed on this marker to each photon at the time it was created) so that each "knows" whether to be up or down - but only in the specific context of a measurement setting that can be changed midflight according to a pseudo-random generator, which can itself involve any number of human brains and/or computers.

So, where did any of this get explained other than by a general purpose "suppose that"? Because there is no known physics - EM, QM or otherwise - that could serve as a base mechanism for any of the above to support the wild assertions involved in SD.
The restricted number of initial states applies to any system. It could be the whole universe if you like, but then you could not make the required calculations. Making the experiment bigger and more complex does not change the qualitative point that some states cannot be prepared, because there is no initial state that evolves into them.

DrChinese said:
3. I wasn't referring to free will in the mentioned measurements of c, gravitation, or any other constant. I was referring to the fact that the ONLY scenario where Superdeterminism is a factor is in Bell tests.
This is also true for non-locality. We don't need to assume it anywhere else.

DrChinese said:
Apparently, the universe is just fine at revealing its true nature without a need to "conspire" at everything else.
Apparently, the universe is also local. Why make an exception here?

DrChinese said:
Imagine that we are measuring the mean lifetime of a free neutron as being 880 seconds. But then you tell me it's really 333 seconds.
I would not tell you that. At no point do I doubt that the results of the Bell test are what they are. They are true and statistically representative. It's only that the statistics in the case of a theory with long-range interactions are different from those of Newtonian mechanics.

DrChinese said:
Your explanation is: It's just that the sample was a result of initial settings, and those initial settings led to an unfair sample.
The sample is perfectly fair for the theory under investigation (say EM). It's not fair for a different theory with different equations, like Newtonian mechanics, but why would you expect that?

DrChinese said:
By analogy, that is much like the SD hypothesis that the local realistic value of my Bell test example must be at least .333, although the observed value is .250. Why do you need a conspiracy to explain the results of one scientific test, but no others?
Your fallacy is to assume that any local realistic theory should predict the same thing as Newtonian mechanics. Try to play pool with charged/magnetized balls. Check and see if the probability of placing the balls in a certain pocket is the same. I would expect it to be different. The balls move differently. There is no conspiracy.
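For reference, the .333 and .250 figures quoted above come from the standard three-angle argument: QM predicts a match probability of ##\cos^2(a-b)## for Type 1 polarization-entangled photons, while any assignment of predetermined outcomes at 0, +120 and -120 degrees yields a match rate of at least 1/3. A minimal sketch reproducing both numbers (an illustration, not anyone's published code):

```python
import itertools, math

# QM: match probability cos^2(a-b) for Type 1 entangled photons measured
# with polarizers at angles a and b (in degrees).
for a, b in [(0, 120), (0, -120), (120, -120)]:
    p = math.cos(math.radians(a - b)) ** 2
    print(f"QM match probability at ({a}, {b}): {p:.3f}")   # 0.250 each

# Local realism: each photon pair carries predetermined outcomes (+1 or -1)
# for all three angles; enumerate every assignment and compute the match
# rate averaged over the three angle pairs.
pairs = list(itertools.combinations(range(3), 2))
rates = []
for outcomes in itertools.product([+1, -1], repeat=3):
    matches = sum(outcomes[i] == outcomes[j] for i, j in pairs)
    rates.append(matches / len(pairs))
print(f"Minimum local-realistic match rate: {min(rates):.3f}")  # 0.333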

DrChinese said:
4. Glad you agree. A contextual theory is not realistic. According to EPR ("elements of reality"), there must be counterfactual values for all elements of reality that exist, regardless of whether or not you can measure them simultaneously.
EPR assumed that indeed. I think they were wrong. Elements of reality do exist for the unmeasured spin components but they are different from what EPR expected.

DrChinese said:
*See for example:
High-fidelity entanglement swapping with fully independent sources
https://arxiv.org/abs/0809.3991
What is your point with this paper?
 
  • #123
PeterDonis said:
It does on any interpretation of QM except one that views QM as just a statistical model over some underlying deterministic physics, where the statistics and probabilities have the standard classical ignorance interpretation. The latter interpretation of QM seems to be one of the least popular ones.
By QM I mean QM's postulates, those 7 rules. Obviously, each interpretation rejects all other interpretations; this is not peculiar to SD.
 
  • #124
WernerQH said:
What could this possibly mean? It is rather obvious that you have never studied any of Aspect et al.'s papers.
It is not sufficient for your utterances to sound logical. They should also make sense.
If entanglement is an effect of long-range interactions between the experimental parts, it follows that it should be possible to reproduce such states at the macroscopic level. Does this make sense to you?

What's your point with Aspect's papers?
 
  • #125
AndreiB said:
What's your point with Aspect's papers?
Read them!
 
  • #126
WernerQH said:
Read them!
I did. There is nothing there about macroscopic entangled states. They use photons, and photons are not macroscopic. What's your point, again?
 
  • #127
AndreiB said:
Either my explanation is true or it is not. I think the word "excuse" here is used to avoid accepting that my explanation is perfectly valid.
Your "explanations" contain two elements which contradict each other. On the one hand, you use typical common-sense reasoning that some interactions are not strong enough to have an influence; on the other hand, you use superdeterminism, where even the smallest imaginable modification would destroy the whole construction by destroying the correlation. One would be valid in a normal world, the other in that superdeterministic world. Using both together is inconsistent.
AndreiB said:
1. The model I proposed is in terms of classical EM. The Big-Bang, the inflation period and all that cannot be described in terms of this model. So, let's stay in a regime where this model makes sense.
Photons flowing from far away toward the devices, and detectors of such photons which would turn the spin measurement detectors as necessary, can be described by classical EM (a photon can simply be described by a particular classical solution fulfilling the quantization condition). So, start the computation a second before the initialization, with the measurement done a second after the initialization, and with those photons that could modify the detector angles being 1.9 light-seconds away from their target detectors, if they exist.

AndreiB said:
2. I agree that if you can prove that "the event which has created these photons has not been in the past light cone of the preparation of the pair" SD is dead. The question is, can you?
In GR without inflation it is well-known and trivial; in GR with inflation one would have to look back further, finding earlier causal events coming from even farther away. Or, similar to the classical EM picture above, we can start with the initial values not at the singularity but later, after these photons have been emitted. Last but not least, if you don't allow the computation to start at some quite arbitrary time, your computation fails even in your own theory, even with almighty computing power.
AndreiB said:
As far as I know there is no theory at this time that is capable of describing the Big-Bang. So all this is pure speculation.
The Big Bang, meaning the hot early phase of the universe where the average density was, say, similar to that of a neutron star, is understood quite well in standard cosmology. The singularity itself only shows that GR becomes invalid when the density is too large. But for the discussion of superdeterminism this is irrelevant anyway. The point I have made is that in your construction there will in any case be initial values which can causally influence each of the detectors but not the pair preparation or the corresponding other detector. If you accept this, there is no need for BB theory.
AndreiB said:
I don't think so. "Normal science" uses a certain model. The conclusions only apply if the model is apt for the experiment under investigation. For example, the kinetic theory of gases applies to an ideal gas. It works when the system is well approximated by such a model. If your gas is far from ideal you don't stubbornly insist on this model, you change it. In the case of Bell's theorem the model is Newtonian mechanics with contact forces only. Such a model is inappropriate for describing EM phenomena, even classical EM phenomena like induction. So, it is no wonder that the model fails to reproduce QM.
Sorry, but this is nonsense. Bell's theorem presupposes only EPR realism (which does not even mention Newton or contact forces) and Einstein causality. Classical EM fits into the theorem, and thus cannot violate the BI. GR too. Realistic quantum interpretations like dBB fulfill EPR realism but violate Einstein causality (though not classical causality).

Other variants of the proof rely only on causality, and what they need is the common cause principle. That's all.
AndreiB said:
"Sufficiently simple" is a loaded term.
Since you are thinking about computations with ##10^{26}## particles, let's take much less, say ##10^9## particles. And let's name an explanation sufficiently simple if it can be shown using computations with fewer than ##10^9## particles. That would be fair enough, no?
AndreiB said:
And I didn't claim that the statistics do not remain stable in this case; I just don't know.
If you are right and the function behaves well, great. We will be able to compute the classical EM prediction for a Bell test. When such a computation is done we will see if it turns out right or wrong.
AndreiB said:
I don't get your point about ##\pi##. Clearly, the digits of ##\pi## are not independent, since they are determined by a quite simple algorithm. Two machines calculating ##\pi## would be perfectly correlated.
But if you have two quite correlated sequences of digits, and add the sequence of the digits of ##\pi## mod 10 to one but not the other, the correlation disappears.
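This is easy to check numerically. A small sketch of the point (an illustration only, assuming the mpmath library for the digits of ##\pi##):

```python
import numpy as np
from mpmath import mp

N = 5000
mp.dps = N + 10                       # working precision in decimal digits
pi_digits = np.array([int(d) for d in str(mp.pi)[2:N + 2]])  # first N decimals

rng = np.random.default_rng(42)
a = rng.integers(0, 10, N)            # first machine's digit stream
b = a.copy()                          # second machine: perfectly correlated

b_mixed = (b + pi_digits) % 10        # add the digits of pi, mod 10

print(np.corrcoef(a, b)[0, 1])        # 1.0: perfect correlation
print(np.corrcoef(a, b_mixed)[0, 1])  # ~0: the correlation disappears
```

The mixed sequence is still produced by a perfectly deterministic rule, yet its correlation with the untouched sequence is statistically indistinguishable from zero.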
AndreiB said:
If Bell correlations are caused by long-range interactions between the experimental parts, one should be able to prepare macroscopic entangled states. I am not aware of any attempt to do so.
I see no basis for this. Entanglement of macroscopic states is destroyed by any interaction with the environment; this is called decoherence.
AndreiB said:
Clearly, all theories with long-range interactions are falsifiable. Classical EM, GR, fluid mechanics have been tested a lot. I have laid out my argument why such theories could be superdeterministic. We will know that when the function is computed. Until then you cannot rule them out.
No. There is a level of insanity of such philosophical theories where it becomes impossible in principle to rule them out. You cannot rule out, say, solipsism. Superdeterminism is in the same category. It is impossible to rule it out. Your computations would be simple computations of normal theories, without any connection to superdeterminism beyond your words. The only thing one can do with superdeterminism is to recognize that it makes no sense and to ignore it.
AndreiB said:
But there is a causal justification. There is a long-range interaction involved that determines the hidden variable. "Normal" science does not assume independence when this is the case.
False. Common sense makes a large difference between some accidental influences and systematic influences which can explain stable correlations.
AndreiB said:
As far as I can tell, the independence assumption was falsified by Bell tests + the EPR argument. Locality can only be maintained if the independence assumption fails. And no violation of locality was ever witnessed in "normal science", right?
Completely wrong. The independence assumption is much more fundamental than the experimental upper bounds found up to now for causal influences. You cannot falsify GR by observations on soccer fields. If you evaluate what you see on a soccer field, you will not even start to question GR. Same here. You will not even start questioning the independence assumption or the common cause principle because of some observations in the quantum domain. But you use them to find out what the experiment tells us. And it tells us that Einstein causality is violated.
AndreiB said:
There is nothing wrong with my logic. If you can't prove a violation (and you can't) you cannot just assume one.
Thanks for making my point. Except that you have to replace your "you" with "I". You just assume a violation of the common cause principle.
AndreiB said:
I think you forget the really important point that without SD you have non-locality. Your arguments based on what "normal science" assumes or not do not work for this scenario, since, when you factor in the strong evidence for locality, the prior probability for a violation of statistical independence is increased by many orders of magnitude.
As explained, non-locality is not really an issue; science developed nicely during the era of non-local Newtonian gravity. Moreover, quantum non-locality is unproblematic because it does not appear at all without some special preparation procedures.

Instead, with superdeterminism we can no longer make any statistical science.
 
  • #128
AndreiB said:
Since EM does not depend on scale, you can test for Bell violations using macroscopic charges. For the reason presented earlier, these would be independent of the rest of the universe. I think this is actually doable in practice.
We have now written proof that you haven't understood what Bell violations are.
Go ahead and describe your macroscopic experiment. :-)
 
  • #129
Sunil said:
1. I disagree. Bohmian and other realistic interpretations of QM, given that they reproduce the QM predictions, are necessarily contextual by the Kochen-Specker theorem.

2. And this can be also easily seen explicitly, given that the trajectories of the system are influenced by the trajectories of the measurement devices.
1. My definition of "realistic" follows EPR ("elements of reality"), the definition Bell used (for better or for worse). Admittedly, there are contextual interpretations of QM in which the measurement devices are themselves active participants in the outcome. I wouldn't call those realistic in the EPR sense, because the individual elements of reality are subjective to the observer's choice of measurement basis. 2. I'd love to learn how Bohmian Mechanics factors in the angle setting of a measurement device (remote or not) to lead us to the observed statistics. Of course, I am quite aware that BM incorporates an underlying function, the value of which is unknown to us at any point in time. I am equally aware that there is the so-called pilot wave which guides the particles, and that BM lacks the traditional QM notion of spin. You might not be able to supply that mechanism, and maybe no one can yet.

------------------------

What would be nice is a description of a specific Bell test (say a run of 10 detected pairs by Alice and Bob) in which we can see the measurement device's impact on the outcomes. We would have Alice's setting fixed at 0 degrees and Bob's alternating between +/-120 degrees, with entangled Type 1 photons (i.e. the polarizations are the same), where the distance apart is not important (since locality is not a limiting factor in BM). To be specific, there is a PBS enforcing the measurement angle setting.

How is Bob's changing PBS affecting the environment such that Bob's PBS orientation at the time of detection is communicated to the other components of the setup? Because I would imagine that all the other changing dynamics in the environment (actually the entire universe, since distance is not a factor) would contribute an overwhelming amount of "noise" as well. Why are some elements of a dynamic environment a critical factor in the observed statistics, and some not?
 
  • #130
DrChinese said:
I'd love to learn how Bohmian Mechanics factors in the angle setting of a measurement device (remote or not) to lead us to the observed statistics.
"The angle setting" is just the way the device is spatially aligned. This affects the quantum potential, which depends on that spatial direction (since there is a term in it describing the magnetic field that the particles encounter as they go through the device), and that in turn affects the trajectories of the (unobservable) particles, which in turn affects the observed statistics of the measurement results.

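A deliberately minimal toy version of this mechanism (my own simplification for illustration; real Bohmian mechanics evolves positions under the guiding equation) treats the device angle ##\theta## as fixing the Born weight ##\cos^2(\theta/2)## of the "up" packet, with the particle's unobservable initial position selecting which packet it ends up in:

```python
import numpy as np

rng = np.random.default_rng(1)

def bohmian_spin_outcomes(theta, n=100_000):
    """Toy model: spin prepared 'up' along z, device rotated by theta.

    The wave splits into an 'up' packet of Born weight cos^2(theta/2) and a
    'down' packet of weight sin^2(theta/2); the hidden variable (a stand-in
    for the particle's initial position within |psi|^2) deterministically
    selects the packet, and hence the outcome.
    """
    w_up = np.cos(theta / 2) ** 2
    position = rng.random(n)                 # quantile of the initial position
    return np.where(position < w_up, +1, -1)

for deg in (0, 60, 120, 180):
    outcomes = bohmian_spin_outcomes(np.radians(deg))
    print(deg, round(float((outcomes == +1).mean()), 3))  # ~cos^2(theta/2)
```

Each individual outcome is fully determined by the hidden position, yet the ensemble reproduces the quantum statistics for every device angle.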
DrChinese said:
aware that BM lacks the traditional QM notion of spin
Yes, in BM a measurement of "spin" is just a measurement of particle trajectories, like every other measurement. See above.
 
  • #131
AndreiB said:
1. Yes, he is quoting himself because he invented the model. What's wrong with that?

't Hooft's model is an existence proof that local, deterministic theories could reproduce QM. I did not claim that his model is a true replacement for the Standard Model.

2. EPR assumed that indeed. I think they were wrong. Elements of reality do exist for the unmeasured spin components but they are different from what EPR expected.

3. What is your point with this paper?
1. Authors don't reference their own work for the purpose of demonstrating its correctness. When it is done, it is usually done to provide additional background and additional reading. In this case, 't Hooft's wild claims are not generally accepted, and so a self-reference is out of line.

The plain fact is: 't Hooft at no time has provided a CA model of a Bell test, much less of QM as a whole. Saying that one "could" be constructed is what we skeptics call "hand waving".

2. If there are values for unmeasured spin components, what are they? Bell showed there were none that matched all quantum mechanical expectation values. And anyway, how are they different from what EPR assumed?

------------------------

So far, I would characterize your rambling piecemeal arguments as anti-standard model and anti-scientific consensus. All without providing any meaningful insight as to a viable opposing view. If you think 't Hooft is on to something, that's an opinion you are welcome to. However, I have explained some of the many obvious problems with his model, which is precisely why it is generally ignored by the community at large. If you are interested in more serious local realistic computer models, they are out there (see for example the work of Hans de Raedt, Kristel Michielsen et al). But there are no superdeterministic models in existence that address Bell. And without tackling Bell head on, no one is going to take 't Hooft's work in this area seriously.

DrC is out of further discussion with you in this thread.
 
  • #132
This seems like a good point at which to close the thread.
 
