A Copenhagen: Restriction on knowledge or restriction on ontology?

  • #241
DarMM said:
Well have a look at the kinds of views left to us:
  1. Every object contains an infinite number of contextual degrees of freedom that interact with the same infinite set of degrees of freedom of other objects nonlocally. Bohmian Mechanics and other nonlocal theories
  2. There is a continuous infinity of worlds, only one of which can be perceived. Everything you see about you is just a particular "slice" of the giant universal wavefunction that is correlated with the sensory apparatus of this version of you. Ultimately even space and time are an illusion, there is only the giant complex wavefunction and nothing else. Many Worlds
  3. The multiple potential futures communicate with the past to realize the present. Transactional Interpretation
  4. There is no dynamics. The history of the world is just that which solves a 4D constraint that does not permit a picture of a 3D world evolving in time in general. For microscopic objects in an experiment this constraint is solved against the presence of classical detector objects. For classical objects it is solved against the presence of other classical objects and so on. Thus the world isn't decomposable, i.e. things don't reduce to their parts because the parts have the whole and other objects on the scale of the whole as a constraint for their properties. Relational Block World
  5. QM isn't really true. The initial state of the universe was just such that we are determined to perform experiments that accidentally give statistics that make it look like it is true. Superdeterminism à la 't Hooft.
  6. The world is just such a way that mathematics only goes so deep. The best you can do is a probabilistic account with the observer embedded in the description to some degree. Beyond that the world becomes non-mathematical. Copenhagen, QBism
Anything anybody comes up with is going to fall into at least one of these categories. Currently nobody really has a view that combines them, but you could have multiple retrocausal worlds for example.
Where within your classification would you place the thermal interpretation?
 
  • #242
A. Neumaier said:
Where within your classification would you place the thermal interpretation?
Category 1. Though I should rephrase it possibly.

A given object has an infinite set of properties, e.g. ##\langle S_x \rangle##, ##\langle S_x^{2} \rangle## etc., so you have Hardy's theorem manifesting. Those properties are contextual. And there are nonlocal properties as well.

The only thing is that ##A \otimes B## for a two photon system isn't really a property of two individual photons in the TI. I'll try to rework it as maybe "interacts" isn't the correct phrasing for the TI.
 
  • #243
There I've edited it to say "infinite number of contextual degrees of freedom which include nonlocal degrees of freedom that contribute to the dynamics".

I think this allows for the distinction between Bohmian Mechanics and the TI: in Bohmian mechanics you have two photons interacting nonlocally, whereas in the TI you have a single object physicists colloquially call a "two photon system" that happens to possess nonlocal dynamical properties.

So it removes the decomposability bias that "interact with other objects" implies.
 
  • #244
A short article from Griffiths:
https://arxiv.org/abs/1901.07050
It discusses many of the points that have come up in this thread, i.e. the CHSH inequalities coming from the assumption of the correlators ##E(a,b)## being marginals on a single sample space and also counterfactual definiteness.

Section 5.2 relates to the point I mentioned in #115. The force of Counterfactual indefiniteness isn't really affected by the fact that only one history occurs.

In case this provokes the single sample space discussion again, remember the point is that there isn't a single sample space for the quantum observables, not that you can't get a sample space for the experimental outcomes (although even that sample space will be "odd", with the observer embedded, due to the lack of a single sample space for the observables).
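To make the single sample space point concrete: if the four ##\pm 1## observables ##A, A'## (Alice) and ##B, B'## (Bob) are all assigned values on one common sample space, the CHSH bound ##|S| \le 2## follows by brute-force enumeration. A minimal illustrative sketch (Python; the variable names are mine, not taken from Griffiths' paper):

```python
from itertools import product

# Enumerate every joint value assignment of the four +/-1 observables
# A, A' (Alice) and B, B' (Bob) on a single common sample space.
best = max(A * B + A * Bp + Ap * B - Ap * Bp
           for A, Ap, B, Bp in product([-1, 1], repeat=4))
print(best)  # 2: every point of the sample space obeys |S| <= 2
```

Since every point of the sample space satisfies the bound, so does any probability-weighted average over the space, which is all the CHSH derivation needs.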
 
  • #245
And another paper by Quintino et al that shows a relation between nonlocality and incompatibility as mentioned earlier in the thread:
https://arxiv.org/abs/1902.05841
Basically, if Alice's measurements and Bob's measurements are locally incompatible enough, you can have correlations that would require a hidden variable theory to be nonlocal.
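As a concrete check of how incompatible local measurements produce such correlations: plugging the singlet correlator ##E(a,b) = -\cos(a-b)## into the CHSH expression at the standard settings gives Tsirelson's value ##2\sqrt{2} > 2##. A small sketch (the angles are the textbook choice, not anything specific to the Quintino et al paper):

```python
import math

def E(a, b):
    # Singlet-state correlator for spin measurements along angles a and b
    return -math.cos(a - b)

a, ap = 0.0, math.pi / 2            # Alice's two (maximally incompatible) settings
b, bp = math.pi / 4, -math.pi / 4   # Bob's two settings

S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(abs(S))  # 2.828... = 2*sqrt(2), exceeding the classical CHSH bound of 2
```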

This follows a long literature on the subject, such as results that any set of incompatible dichotomic local POVMs can produce Bell-violating statistics:
https://arxiv.org/abs/0905.2998
There's also a nice paper here showing the links between incompatibility and contextuality. It has some references on how contextuality is a resource for quantum computers:
https://arxiv.org/abs/1805.02032
Unsurprisingly, there is a direct link between when Alice's and Bob's observables would each alone require a contextual hidden variable model and when Alice-Bob correlations would require a nonlocal model. There's a sort of trade-off between contextuality and nonlocality in any given situation:
https://arxiv.org/abs/1507.08480
https://arxiv.org/abs/1603.08254
https://arxiv.org/abs/1307.6710
Note the third paper again points out how these results arise from assuming a common sample space.

A summary paper on Cabello's work on the exclusivity principle has a nice demonstration that contextuality and nonlocality cases have the same graphs of incompatible outcomes, i.e. they can be the same thing embedded differently in spacetime:
https://arxiv.org/abs/1801.06347
And finally, to tie back to the observer/agent view of Copenhagen and QBism, the paper above and this paper:
https://arxiv.org/abs/1901.11412
get a good amount of QM out of generalized Bayesian reasoning.
 
  • #246
Ontology. If we can't measure something in principle, then neither can nature.
 
  • #247
DarMM said:
A short article from Griffiths:
https://arxiv.org/abs/1901.07050

I don't think that Griffiths' argument in section 2.2 is at all a fair rebuttal to the argument that EPR implies nonlocality. The recipe for working with quantum mechanics is:
  1. When you make a measurement, it gives an eigenvalue of the observable being measured with probabilities given by the Born rule.
  2. Afterward, the state will have "collapsed" to an eigenstate of the observable corresponding to the measured eigenvalue.
In an EPR-type experiment, the state is distributed over a spacelike region, so if this collapse is taken literally, then it would seem to imply a nonlocal process. The measurement here causes an instantaneous change there.

E, P, and R were arguing that this DOESN'T happen; that the measurement here doesn't affect the measurement over there. The measurement over there is only affected by local conditions over there (the hidden variables).

Bell's inequality shows that hidden variables of the sort that E, P, and R had in mind can't explain the nonlocal correlations in an EPR-type experiment. So there isn't a realistic alternative to the "collapse" model, which is nonlocal.

Griffiths' argument in section 2.2 seems to be completely missing the point. Yes, a collapse interpretation doesn't violate locality if you don't have a distributed state. But that was the whole point of the EPR argument---that the "collapse" interpretation seems to imply nonlocality if you have a distributed state.

I'm very unimpressed.
 
  • #248
Demystifier said:
If the point of interpretations is not to make measurable predictions (because we already have the unambiguous quantum formalism for that), then what is the point of interpretation that is not intuitive?

I really appreciate Demystifier's persistent questioning of our understanding of the foundations of QM, even if the Bohmian perspective is superficially opposite to my stance. I have started to see common junctions in the perspectives; it's just that the choice of words sometimes makes this sound very different.

If I may, I'll add another comparison/abstraction here on the notion of an ontology being intuitive.

As per past discussions, there exists a common confusion/misunderstanding about to what extent quantum predictions are "subjective" or "relative to a classical measurement device". That has led people to confuse this subjectivity with HUMAN subjectivity, or with the idea that human science may be subjective, leading further to bringing the notion of the human brain into the foundations of QM. This naturally leads many physicists to react strongly against such mumbo jumbo.

Similarly, one can wonder what sense there is in bringing in a concept of human intuition to rate physical ontologies/theories. As I enjoy Demystifier's questions, I do not attach so much attention to the choice of words; rather, I see an abstraction and analogy to the information-processing agent perspective that is MY intuitive picture, and the interesting thing is that they may relate the way subjectivity and relativity relate.

In my previous comment I compared the ontology to a retained compressed code that REPRESENTS the expectation model for future processes. In this perspective, the ontology must be "natural" in the sense that "computations" are spontaneous and easy. I.e., if we consider the observer's "calculation of expectations" according to some "mechanics of his ontological structure", in my view this calculation must be a spontaneous process. And of course I would have no problem calling such a property "intuitive". Intuitive means that it is something that can be executed with minimal effort! Ideally spontaneously.

In this way, the notion of intuition can be understood without bringing in human concepts, neuroscience or psychology. It is simply an adjective that says that the "inference" from the ontology is natural. This is exactly how I see it as well, and I think it's important; rephrasing it like this might make others understand it too, without being put off by the choice of words.

As long as the foundational questions remain open research questions, we cannot escape words. We also cannot escape mathematics, of course; we need both. And ideas need both to be explained.

My final comment is that I would say this abstracted notion of "intuitive" is still subjective, but not subjective as in human-to-human subjectivity; it is rather subjective relative to the computing agent (think matter). This can potentially be cast in terms of foundational physics, but to handle the circularity here I again suggest we need to think in terms of evolution. The problem always starts, I think, when you expect a static, eternal starting point from which to construct everything, ignoring that the constructing process itself needs structure for guidance. It's like matter and geometry, but here we talk about ontology and epistemology. The evolved ontology is necessarily natural by definition. But even naturality has variation.

/Fredrik
 
  • #249
stevendaryl said:
I don't think that Griffiths' argument in section 2.2 is at all a fair rebuttal to the argument that EPR implies nonlocality
The original EPR argument doesn't imply nonlocality, because it has a local model, right?

stevendaryl said:
The measurement here causes an instantaneous change there
I think this is trivially avoided in Bohr and Neo-Copenhagen, with the wavefunction being epistemic. The whole EPR argument can be repeated in Spekkens's toy model or many other local hidden variable models. These models similarly have a collapse, but nothing is nonlocal.
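To illustrate how an epistemic "collapse" can be entirely local, here is a deliberately classical analogue (Bertlmann's-socks style; it is not Spekkens's actual toy model, just the same moral): Alice's knowledge of Bob's side updates instantly when she looks at her half of a correlated pair, with no physical influence at Bob's location.

```python
import random

def run_trial(rng):
    # The source prepares perfectly anticorrelated hidden values (the ontic state)
    lam = rng.choice([+1, -1])
    alice_card, bob_card = lam, -lam

    # Alice's epistemic state about Bob's card before she looks: both values possible
    knowledge_before = {+1, -1}
    # She looks at her card: her knowledge "collapses" instantly...
    knowledge_after = {-alice_card}
    # ...but nothing physical changed at Bob's location.
    return knowledge_after == {bob_card}

rng = random.Random(0)
print(all(run_trial(rng) for _ in range(1000)))  # True: knowledge collapses, nothing nonlocal
```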

It's the CHSH inequalities that have a true implication, not EPR.

stevendaryl said:
Griffiths' argument in section 2.2 seems to be completely missing the point. Yes, a collapse interpretation doesn't violate locality if you don't have a distributed state
I think his argument is more in section 5.2: that nonlocality arises from presuming a common sample space/realism for the observables. It's an alternative way out of nonlocality.

Of course whether one likes the "price" of what the lack of a common sample space implies is the issue, i.e. the absence of conventional realism.
 
  • #250
DarMM said:
The original EPR argument doesn't imply nonlocality, because it has a local model, right?

The original EPR argument was trying to show that QM doesn't violate locality. It failed because of Bell's argument.

I think this is trivially avoided in Bohr and Neo-Copenhagen with the wavefunction being epistemic.

No, being epistemic doesn't accomplish anything. Saying that ##\psi## is epistemic is to say that it isn't the reality, it's only a reflection of our information about that reality. So the issue then becomes: what is the reality, and is it nonlocal?

My point is that Griffiths is misunderstanding the EPR argument if he thinks that showing that violations of Bell's inequality can be achieved in intrinsically local situations is a rebuttal. That gets things backwards. The Bell argument isn't that nonlocality is required to violate his inequality; there is no problem coming up with a causal model that violates it locally. The difficulty is coming up with a causal model that violates it for spacelike separated observables.
 
  • #251
Just to be clear, what ultimately is your argument in this thread? Is it that, contrary to conventional wisdom, giving up conventional realism doesn't save locality, and that QBism and Copenhagen are actually nonlocal?

I'm also still confused by this notion that contextuality is just "mathematical" and of no real foundational relevance. How do you square such a view with all of its consequences in Quantum Information alone?
 
  • #252
DarMM said:
Just to be clear, what ultimately is your argument in this thread? Is it that, contrary to conventional wisdom, giving up conventional realism doesn't save locality, and that QBism and Copenhagen are actually nonlocal?

I was specifically responding to the argument by Griffiths in the link you posted. It seems to be based on a misunderstanding of the EPR and Bell arguments.

As far as what I personally think, it seems to me that there are only three possibilities that make any sense: (1) something nonlocal is going on, or (2) QM is wrong (or at least incomplete), or (3) something like Many-Worlds is true (in spite of what appears to be the case, multiple macroscopically different versions of the world can exist simultaneously).
 
  • #253
stevendaryl said:
As far as what I personally think, it seems to me that there are only three possibilities that make any sense: (1) something nonlocal is going on, or (2) QM is wrong (or at least incomplete), or (3) something like Many-Worlds is true (in spite of what appears to be the case, multiple macroscopically different versions of the world can exist simultaneously)
For what reason do you exclude the other ways out of Bell's theorem?
 
  • #254
DarMM said:
For what reason do you exclude the other ways out of Bell's theorem?

I can't make any sense of them.
 
  • #255
stevendaryl said:
I can't make any sense of them.
That seems strange to me, as the retrocausal and acausal approaches do manage to replicate a good deal of QM, whereas MWI can't get out the Born rule and so replicates no predictions at all. Ruth Kastner has even managed to get out some of QED. Not that I'm convinced by them either, since there are many calculations they haven't replicated as of 2019.
 
  • #256
Actually, in my opinion, there is a huge misunderstanding in quantum mechanics today. I cannot see the difference between de Broglie's formula and Heisenberg's uncertainty principle. If someone understands that, then your question is answered!
 
  • #257
DarMM said:
That seems strange to me as the retrocausal and acausal approaches do manage to replicate a good deal of QM

I lump those in with being nonlocal. If you have effects traveling both forward and backward in time, then spacelike separations are no impediment. A spacelike separation is a combination of two timelike separations, if you allow both directions in time.

whereas MWI can't get out the Born rule and so replicates no predictions at all.

I agree that MWI has unresolved problems, but they are actually problems with QM in general; they just show up more explicitly in MWI.

When I say "Many-Worlds" I mean, in general, theories in which measurement results and macroscopic variables don't have definite values, i.e. in which you can have superpositions of macroscopically distinguishable states. To me, this is just a consequence of QM. There is nothing in QM that limits the size of the system to which it applies. So it's actually inconsistent (what I've called a "soft inconsistency") to assume both that QM applies to everything and that measurements have definite outcomes. So something like Many-Worlds is, to me, implied by QM.

But I certainly appreciate the problems with that: If measurements don't have definite outcomes, then it's tough to make sense of Born's rule.
 
  • #258
stevendaryl said:
I lump those in with being nonlocal. If you have effects traveling both forward and backward in time, then spacelike separations are no impediment. A spacelike separation is a combination of two timelike separations, if you allow both directions in time.
I can see why you'd think that, but I don't think it is valid. The retrocausal views have no spacelike propagation, and the acausal views have no propagation at all involved in the violation of the CHSH inequalities.

When I say "Many-Worlds" I mean, in general, theories in which measurement results and macroscopic variables don't have definite values, i.e. in which you can have superpositions of macroscopically distinguishable states. To me, this is just a consequence of QM. There is nothing in QM that limits the size of the system to which it applies. So it's actually inconsistent (what I've called a "soft inconsistency") to assume both that QM applies to everything and that measurements have definite outcomes.
However, despite people attempting to prove this is a contradiction, nobody has produced a mathematical theorem to that effect; that was the purpose of Frauchiger-Renner and its spin-offs.

But I certainly appreciate the problems with that: If measurements don't have definite outcomes, then it's tough to make sense of Born's rule.
The Born rule is needed to derive the macroscopic worlds being classical at all, so the problems go quite deep.
 
  • #259
DarMM said:
I can see why you'd think that, but I don't think it is valid. The retrocausal views have no spacelike propagation, and the acausal views have no propagation at all involved in the violation of the CHSH inequalities.

Well, for the sake of categorization, there are two rough categories: (1) normal causality, in which the future is affected by the past lightcone, and (2) everything else.

However, despite people attempting to prove this is a contradiction, nobody has produced a mathematical theorem to that effect; that was the purpose of Frauchiger-Renner and its spin-offs.

The foundations of QM are too fuzzy to derive a tight contradiction. That isn't a plus, in my opinion.

The Born rule is needed to derive the macroscopic worlds being classical at all, so the problems go quite deep.

Yes, the problems are very deep, and nobody has a clue. Except maybe the Bohmians.
 
  • #260
stevendaryl said:
The foundations of QM are too fuzzy to derive a tight contradiction. That isn't a plus, in my opinion
The Frauchiger-Renner arguments don't fail for fuzzy reasons though, but due to specific mathematical properties of the theory like intervention sensitivity.

stevendaryl said:
Well, for the sake of categorization, there are two rough categories: (1) normal causality, in which the future is affected by the past lightcone, and (2) everything else.
Certainly, but the acausal views don't have nonlocal interactions or nonlocal degrees of freedom, so they simply are not nonlocal.
 
  • #261
atyy said:
I've read this now and a few of its references. It derives the same kind of trade off we've been discussing here, but in the language of causal networks. You recover locality, but at the cost of embedding the observer/the notion of agent into the theory.

So it's a choice between Nonlocality and Participatory realism as we had above.

It can be extended to include the other views with the alternate causal graphs that replicate the Bell inequalities as Pusey and Leifer do in their retrocausality paper (https://arxiv.org/abs/1607.07871).

So you can derive the same kind of result with causal graphs, multiple sample spaces, counterfactual indefiniteness, etc.; there are a few different ways of doing it.
 
  • #262
stevendaryl said:
My point is that Griffiths is misunderstanding the EPR argument if he thinks that showing that violations of Bell's inequality can be achieved in intrinsically local situations is a rebuttal. That gets things backwards. The Bell argument isn't that nonlocality is required to violate his inequality; there is no problem coming up with a causal model that violates it locally. The difficulty is coming up with a causal model that violates it for spacelike separated observables.
I think Griffiths' argument is a bit more than this in light of his Section 5.

What kind of causal models violate them locally?

I think this relates back to contextuality/counterfactual indefiniteness. By the Kochen Specker theorem a hidden variable theory has to be contextual. This removes the agential notions from the theory, but at the so far minor cost of contextuality.

However, when you have spacelike separation, the context is extended over a large region of spacetime, and thus to "act in accordance with the context" it has to be nonlocal. Aravind gives a good example of the link between contextuality and nonlocality in https://arxiv.org/abs/quant-ph/0701031. So Bell's theorem is sort of the spacelike version of the timelike Kochen-Specker. Cabello discusses this at the start of his paper: https://arxiv.org/abs/1801.06347

However all this happens only if you try to introduce the hidden variables. If you just swallow Participatory Realism there is no such context dependence and thus you don't end up with nonlocality.

Of course Participatory Realism is troubling for different reasons.
 
  • #263
Demystifier said:
The so called "Copenhagen" interpretation of QM, known also as "standard" or "orthodox" interpretation, which is really a wide class of related but different interpretations, is often formulated as a statement that some things cannot be known. For instance, one cannot know both position and momentum of the particle at the same time.
...
But on other hand, it is also not rare that one formulates such an interpretation as a statement that some things don't exist. For instance, position and momentum of the particle don't exist at the same time.

Which of those two formulations better describes the spirit of Copenhagen/standard/orthodox interpretations?
What you have described here is not Copenhagen. What you have written is much closer to the notion of Complementarity (as promoted by Wilczek, et al.). A little more about this below, but to circle back to Copenhagen briefly:

Copenhagen consists of three main pillars.
1. There are things in the world called "observers" (which correspond to our intuitive notion of people/scientists/grad students).
2. There are events in the world called "measurements" (which correspond with our intuitive notion of measuring a system.)
3. It does not make sense to ask about a particle's position prior to measurement. In another thread you quoted Bohr's remark that QM is a theory about what is measured, not what is "out there" in the world.

Copenhagen was satisfactory in the early years of the theory, since no serious scientist was going to demand a formal definition of a human observer, or ask pathological questions about "measurement". Eventually, everyone did both things, giving rise to a pantheon of various modern interpretations. "Observers" are also made of particles, and "measurement" is reformulated as some kind of information copying.

Contemporary notions of Complementarity use the verb "to know" for a specific reason. Any information about the system's state could be stored somewhere independent of the act of measurement (say, in the RAM of a computer). It appears, for all intents and purposes, that the mere possibility of an observer being able to retrieve this information is enough, by itself, to both remove interference and destroy entanglement. The situation would be far more palatable if the act of measuring were the skeleton key that destroys superposition. Complementarity says this is far more subtle. The mere possibility of that information leaking into the environment will do the trick. The universe conspires to prevent you from knowing.

Demystifier said:
To be sure, adherents of such interpretations often say that those restrictions refer to knowledge, without saying explicitly that those restrictions refer also to existence (ontology).
Actually, Quantum Bayesianists would explicitly state that the restrictions do not apply to ontology. (I'm not an advocate myself.) But they would say that any and all of these restrictions derive from the observer's knowledge.

Demystifier said:
Moreover, some of them say explicitly that things do exist even when we don't know it. But in my opinion, those who say so are often inconsistent with other things they say. In particular, they typically say that Nature is local despite the Bell theorem, which is inconsistent. It is inconsistent because the Bell theorem says that if something (ontology, reality, or whatever one calls it) exists, then this thing that exists obeys non-local laws. So one cannot avoid non-locality by saying that something is not known. Non-locality implied by the Bell theorem can only be avoided by assuming that something doesn't exist. Hence any version of Copenhagen/standard/orthodox interpretation that insists that Nature is local must insist that this interpretation puts a severe restriction on the existence of something, and not merely on the possibility to know something.
I am not sure you have characterized Bell's Theorem correctly here, but I wanted to say something else.

Modern physics separated itself from classical notions of ontology around the years of the EPR debates. We have, as a civilization, tools called QM and QFT, and those tools, standing innocently on their own, cannot identify what Einstein called objective "elements of reality". If the position of a particle is not an element-of-reality, then what is? Perhaps the objective element-of-reality we seek is the Quantum State (PBR Theorem). Or perhaps it is the Wave Function as some kind of extended wave (as advocates of Many Worlds claim).

You will undoubtedly detect what you call "inconsistencies" in what people write about these topics. English has limits and it will fray at the edges. While the formal tools cannot identify elements of reality, the people who use them definitely think they can. These inconsistencies are unlikely to go away soon.
 
  • #264
Demystifier said:
If the point of interpretations is not to make measurable predictions (because we already have the unambiguous quantum formalism for that), then what is the point of interpretation that is not intuitive?
The point of a non-intuitive interpretation is to change one's intuition to match the physics. The ether was an intuitive interpretation, but there was no physics in an ether theory, and the intuition it imparted led to incorrect physics without adding more baggage to eliminate it. Similarly, in quantum mechanics, a limitation on what nature can specify contains real physics. Trying to evade it (e.g., Bohmian mechanics) should lead to different physical predictions. If it does not, then it's only intuitive to the extent that the intuition it provides is wrong and actually eliminates the real physics. The only difference between an intuitive interpretation and a non-intuitive one is that intuition sometimes has to follow from accepting new physics that has no counterpart in everyday intuition to analogize to.
 
  • #265
hyksos said:
Perhaps the objective element-of-reality we seek is the Quantum State (PBR Theorem)
Just to expand on this: in my understanding, the PBR theorem says that if you want a theory obeying the ontological models framework's axioms to match quantum mechanical predictions and satisfy preparation independence, then the state space of the theory has to include the wavefunction.

So one can either accept the wavefunction as real or reject the ontological models framework's axioms.
 
  • #266
stevendaryl said:
Well, for the sake of categorization, there are two rough categories: (1) normal causality, in which the future is affected by the past lightcone, and (2) everything else.
To me, non-local is not the same category as retrocausal, precisely because the former need have no causality outside the past light cone (nor back in time influences), while retrocausal explicitly does.

Consider the events E1: preparation of entangled particles; E2: choice of a measurement of particle 1, and its result; E3 choice of a measurement of particle 2 and its result; E4: observation of the results of E2 and E3.

The physics at E2 is wholly unaffected by E3, even in principle. Similarly for E3 vis-à-vis E2.

Nonlocality only enters into observations (physics) at E4, whence E1, E2, and E3 are all in its past light cone. The nonlocal aspect is simply that modeling the correlations observable at a sequence of E4s involves both E2 and E3 information, and that model cannot treat E2 and E3 as independent (nor as determined by the state at E1). Note that immediately after E2 you cannot predict what will be seen at E4, because you have no idea what measurement will be taken at E3.

[edit: that is, the correlations seen at E4 are caused by preparation at E1, choice of measurement at E2, and choice of measurement at E3. No part of this is outside normal SR causal structure.]
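This picture can be sketched numerically. Simulating singlet statistics with ##E(a,b) = -\cos(a-b)##, Alice's marginal at E2 stays flat no matter which setting Bob chooses at E3; only the correlations compiled at E4 depend on both settings. An illustrative toy Monte Carlo (my own construction, not from any post above):

```python
import math
import random

def singlet_pair(a, b, rng):
    """Sample one pair of +/-1 outcomes reproducing E(a, b) = -cos(a - b)."""
    A = rng.choice([+1, -1])                # E2: Alice's outcome is uniformly random
    p_same = math.sin((a - b) / 2) ** 2     # Singlet probability that B equals A
    B = A if rng.random() < p_same else -A  # E3: Bob's outcome
    return A, B

rng = random.Random(0)
a = 0.0
for b in (0.3, 1.2):  # Bob changes his setting; Alice's marginal must not move
    mean_A = sum(singlet_pair(a, b, rng)[0] for _ in range(20000)) / 20000
    print(round(mean_A, 3))  # close to 0 for either choice of b: no signalling at E2
```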
 
  • #267
Demystifier said:
What evidence do we have for the claim that the real stuff cannot be described mathematically?
How would anybody be able to determine this one way or the other?
 
  • #268
PAllen said:
To me, non-local is not the same category as retrocausal, precisely because the former need have no causality outside the past light cone (nor back in time influences), while retrocausal explicitly does.

Unfortunately, that reasoning is flawed. If events are spacelike separated, then a Lorentz transformation can make them simultaneous, or make either event occur before the other. So, if you assume one event causes the other, you can make a Lorentz transform to a frame in which that ordering is reversed, which requires the event you assumed was the cause to act retrocausally. That is what non-local means.
 
  • #269
bobob said:
Unfortunately, that reasoning is flawed. If events are spacelike separated, then a Lorentz transformation can make them simultaneous, or make either event occur before the other. So, if you assume one event causes the other, you can make a Lorentz transform to a frame in which that ordering is reversed, which requires the event you assumed was the cause to act retrocausally. That is what non-local means.
This isn't true if retrocausal influences occur only within the light cone.

And it certainly fails in acausal theories, for which there is no physical propagation at all, only a constraint on the 4D history.
 
  • #270
bohm2 said:
How would anybody be able to determine this one way or the other?
The argument is essentially the failure of hidden variable theories and how fine-tuned they have to be. However, like most interpretative arguments, it's not definitive.
 
