Is there any hope at all for Locality?

  • Thread starter: andrewkirk
  • Tags: Locality

Summary:
The discussion centers on the implications of Bell's theorem and the EPR paradox regarding locality in quantum mechanics. It highlights that experiments have shown correlations predicted by quantum mechanics rather than those expected from hidden variable theories that maintain locality. The conversation explores the tension between accepting non-realism or non-counterfactual definiteness and the preservation of locality, suggesting that instantaneous effects challenge traditional notions of locality. Superdeterminism is mentioned as a potential solution, though it is met with skepticism. Ultimately, the dialogue reflects a deep uncertainty about reconciling quantum mechanics with classical ideas of locality and realism.
  • #91
bhobba said:
Scratching head. I can't see how ANY local realistic theory can be compatible with Bell.

What am I missing?

Thanks
Bill
Bell started from idealized assumptions that may never be realizable in the real world, and there are different understandings of what "local realistic" is supposed to mean. According to some experts in the field, not all possible "non-spooky" ideas that could reasonably match the experimental agreement with QM are necessarily covered by Bell's derivation. There have been derivations by others (such as Hellman) based on more realistic assumptions; I have no idea how solid those are. Thus, pragmatic and careful commentators use such terms as "Bell locality" in order to clearly delineate what has been established in theory.
 
  • #92
harrylin said:
I have no idea how solid those are. Thus, pragmatic and careful commentators use such terms as "Bell locality" in order to clearly delineate what has been established in theory.

I think I would need to see the details before understanding how it's possible. I have been through Bell, from his papers and from textbooks, and to me it seems pretty airtight.

Thanks
Bill
 
  • #93
bhobba said:
I think I would need to see the details before understanding how it's possible. I have been through Bell, from his papers and from textbooks, and to me it seems pretty airtight.

Thanks
Bill
The derivation by Hellman that I mentioned supposedly proves Bell's theorem for imperfect correlations (while still adhering, I suppose, to Bell's assumptions about "local realism"; Bell addressed what he understood EPR to mean by that). My opinion on this is still swayed in opposite directions by papers on each side of the debate.
 
  • #94
I consider it this way, I am sorry for digressing from the above discussion.
It's the play of the mathematical Fourier transformation. Position and momentum, energy and time, and all the other conjugate variable pairs are related by the mathematical Fourier transform. This mathematical relationship handicaps us, and hence the non-locality. Localizing a particle in position space will cause its conjugate variable to have infinite uncertainty; perhaps if someone could see further into the mathematics, it might help!

It is just an opinion, waiting for criticisms and insight!
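To make that concrete, here is a rough numerical sketch (my own toy illustration, with ħ = 1 and an arbitrary grid; nothing here is from the discussion above): squeeze a Gaussian wave packet in position space and its momentum-space spread grows, so the product of the two widths stays near 1/2.

```python
import numpy as np

# Toy illustration of the Fourier/uncertainty trade-off (hbar = 1):
# the narrower the packet in position space, the wider it is in k-space.
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
k = np.fft.fftshift(np.fft.fftfreq(len(x), d=dx)) * 2 * np.pi
dk = k[1] - k[0]

for sigma_x in (4.0, 1.0, 0.25):
    psi = np.exp(-x**2 / (4 * sigma_x**2))            # Gaussian packet of width sigma_x
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)       # normalize
    phi = np.fft.fftshift(np.fft.fft(psi)) * dx       # momentum-space amplitude
    prob_k = np.abs(phi)**2
    prob_k /= np.sum(prob_k) * dk
    sigma_k = np.sqrt(np.sum(k**2 * prob_k) * dk)     # spread in k (mean is zero)
    print(f"sigma_x = {sigma_x:5.2f}   sigma_k = {sigma_k:6.3f}   product = {sigma_x * sigma_k:5.3f}")
```

For a Gaussian the product comes out at the minimum value 1/2; pinning the packet down further in x only pushes the spread in k up.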
 
  • #95
bhobba said:
Scratching head. I can't see how ANY local realistic theory can be compatible with Bell.

What am I missing?

Thanks
Bill

Well, the most straightforward proof that the spin-1/2, twin-pair EPR experiment is incompatible with a local realistic model unrealistically assumes that

  1. Every pair of particles is detected.
  2. No "stray" particles (not from a twin pair) are detected.
  3. The two experimenters correctly associate corresponding detections.

With those assumptions, it's easy to demonstrate that there can't be a local, realistic model accounting for the predictions of QM.

But a real experiment might (and likely will) violate all three of these assumptions. Taking into account the possibility of errors makes the analysis a lot more complicated. At that point, it's beyond me whether the analysis was done correctly or not, so I have to take it on faith. But then if a paper claims that actual experiments have failed to account for possibility X, I don't have any way of judging that.
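To make the ideal-case claim concrete, here is a rough sketch (my own toy code; the settings and the particular hidden-variable rule are illustrative assumptions, not anything from the experiments discussed here). At the standard CHSH angles the quantum singlet prediction gives S = 2√2 ≈ 2.83, while a deterministic local model in which every pair is detected stays at the local bound of 2, up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard CHSH settings (radians): Alice uses a, a'; Bob uses b, b'.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

def qm_correlation(alpha, beta):
    # Quantum prediction for the spin-1/2 singlet: E(alpha, beta) = -cos(alpha - beta)
    return -np.cos(alpha - beta)

def lhv_correlation(alpha, beta, n=200_000):
    # Toy deterministic local hidden-variable model: each pair carries a shared
    # angle lam; each side computes its outcome from its own setting and lam
    # alone, and every pair is detected (assumptions 1-3 above all hold).
    lam = rng.uniform(0, 2 * np.pi, n)
    A = np.sign(np.cos(alpha - lam))
    B = -np.sign(np.cos(beta - lam))
    return np.mean(A * B)

def chsh(E):
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print("QM  S =", chsh(qm_correlation))   # 2*sqrt(2) ~ 2.83
print("LHV S =", chsh(lhv_correlation))  # ~ 2: the local bound, never exceeded beyond sampling noise
```

You can swap in any other local rule for A and B; as long as each outcome depends only on the local setting and the shared lam, S never gets meaningfully past 2.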
 
  • #96
andrewkirk said:
But I can't see how even accepting that (ie accepting non-realism or non-counterfactual definiteness) allows us to still believe in locality in the face of the Bell theorem and the subsequent experiments.
Yes, some physicists (e.g. Norsen, Gisin, etc.) have argued that Bell's theorem implies non-locality regardless of other issues (e.g. non-realism, hidden variables, determinism, etc.):
One can divide reasons for disagreement (with Bell’s own interpretation of the significance of his theorem) into two classes. First, there are those who assert that the derivation of a Bell Inequality relies not just on the premise of locality, but on some additional premises as well. The usual suspects here include Realism, Hidden Variables, Determinism, and Counter-Factual-Definiteness. (Note that the items on this list are highly overlapping, and often commentators use them interchangeably.) The idea is then that, since it is only the conjunction of locality with some other premise which is in conflict with experiment, and since locality is so strongly motivated by SR, we should reject the other premise. Hence the widespread reports that Bell’s theorem finally refutes the hidden variables program, the principle of determinism, the philosophical notion of realism, etc...

Since all the crucial aspects of Bell’s formulation of locality are thus meaningful only relative to some candidate theory, it is perhaps puzzling how Bell thought we could say anything about the locally causal character of Nature. Wouldn’t the locality condition only allow us to assess the local character of candidate theories? How then did Bell think we could end up saying something interesting about Nature?...That is precisely the beauty of Bell’s theorem, which shows that no theory respecting the locality condition (no matter what other properties it may or may not have – e.g., hidden variables or only the non-hidden sort, deterministic or stochastic, particles or fields or both or neither, etc.) can agree with the empirically-verified QM predictions for certain types of experiment. That is (and leaving aside the various experimental loopholes), no locally causal theory in Bell’s sense can agree with experiment, can be empirically viable, can be true. Which means the true theory (whatever it might be) necessarily violates Bell’s locality condition. Nature is not locally causal.
Local Causality and Completeness: Bell vs. Jarrett
http://arxiv.org/pdf/0808.2178v1.pdf

Of course, others argue that it is still possible to have a local and realistic model if one is willing to not deny the possibility of retrocausality:
Similarly, the significance of Bell's work in the 1960s would have seemed quite different. Bell's work shows that if there's no retrocausality, then QM is nonlocal, in apparent tension with special relativity. Bell knew of the retrocausal loophole, of course, but was disinclined to explore it. (He once said that when he tried to think about backward causation he "lapsed quickly into fatalism".)
Dispelling the Quantum Spooks-a Clue that Einstein Missed?
http://arxiv.org/pdf/1307.7744.pdf
 
  • #97
andrewkirk said:
As I understand it, EPR proposed their entanglement thought experiment as a means of demonstrating that Quantum Mechanics was incomplete, and hence that the Copenhagen interpretation (which says that the wave function is a complete description of the state of a system) was wrong. They postulated the existence of hidden variables as a way of 'completing' the theory. Here 'hidden' just means 'not in any way reflected in the wave function'.

Bell proved that any extension of QM that uses hidden variables will predict correlations for measurements of entangled particles that differ from what QM predicts, if the principle of locality is to be maintained.

Aspect et al showed, subject to various minor loopholes on which most people seem to place not much reliance, that experimentally observed correlations follow the QM predictions rather than those predicted by a hidden variable theory that preserves locality.

From this we inductively conclude that there is no valid hidden variable theory that preserves locality.

Various presentations of this topic suggest that the tests of Bell's theorem have shown that we cannot maintain both locality and something else, where that something else is variously described as realism, counterfactual definiteness, or other similarly vague-seeming terms. This seems consistent with EPR's and Bell's original ideas, which were to challenge or defend the Copenhagen interpretation that a particle does not have a definite position and momentum unless it is in an eigenstate of one of the two operators.

But I can't see how even accepting that (ie accepting non-realism or non-counterfactual definiteness) allows us to still believe in locality in the face of the Bell theorem and the subsequent experiments. The correlations in Bell's theorem imply that Alice measuring spin along a certain axis has an instantaneous effect on the probability distribution of the results of Bob's measurement. So retreating into the indeterminacy of the Copenhagen interpretation does not appear to have allowed us to preserve locality since an instantaneous effect has occurred across a spacelike interval.

I realize that this is a hand-wave rather than a mathematical proof, but I find myself unable to imagine what sort of a theory (extension of QM) or interpretation could remain consistent with the Bell results while still preserving locality.

I would be grateful for any light that contributors are able to shed on my fog of puzzlement.




I like to think physicists are sometimes a bit biased... the universe is not only non-local, it is also a local phenomenon. We are living proof of it.
 
  • #98
bohm2 said:
Yes, some physicists (e.g. Norsen, Gisin, etc.) have argued that Bell's theorem implies non-locality regardless of other issues (e.g. non-realism, hidden variables, determinism, etc.):

Local Causality and Completeness: Bell vs. Jarrett
http://arxiv.org/pdf/0808.2178v1.pdf

Of course, others argue that it is still possible to have a local and realistic model if one is willing to not deny the possibility of retrocausality:

Dispelling the Quantum Spooks-a Clue that Einstein Missed?
http://arxiv.org/pdf/1307.7744.pdf

I call retrocausal interpretations "non-realistic" because they are contextual/observer dependent. To me, a realistic interpretation means that counterfactual observations are possible. However, not everyone defines things the same as I. Bohmians (I guess you are one) call their interpretation "realistic" even though it is as realistic (or unrealistic) as the retrocausal ones (since it is also contextual/observer dependent).
 
  • #99
harrylin said:
Anyway, if in the coming weeks I can't find an error in the program (which is not by DeRaedt) then I will be very much interested to see the refutation. :-p

Their model is not a theory. It is a computer simulation. When I ran it (a particular version), it worked successfully as Kristel indicated it would. I have never completed my analysis of the program, it's one of my long-standing to-do's. :smile: However, the version I ran exploited the fair sampling loophole - which has been long closed.

So my point is: there are no existing local realistic theories on the table to disprove. I am confident because of Bell that none is forthcoming either.
 
  • #100
DrChinese said:
I call retrocausal interpretations "non-realistic" because they are contextual/observer dependent. To me, a realistic interpretation means that counterfactual observations are possible. However, not everyone defines things the same as I. Bohmians (I guess you are one) call their interpretation "realistic" even though it is as realistic (or unrealistic) as the retrocausal ones (since it is also contextual/observer dependent).
Yes, I recall you mentioning this before. As an aside, in order to maintain Lorentz invariance there are Bohmian models that are also retrocausal:
A version of Bohm’s model incorporating retrocausality is presented, the aim being to explain the nonlocality of Bell’s theorem while maintaining Lorentz invariance in the underlying ontology...The aim of this paper is to construct a version of Bohm’s model that also includes the existence of backwards-in-time influences in addition to the usual forwards causation. The motivation for this extension is to remove the need in the existing model for a preferred reference frame. As is well known, Bohm’s explanation for the nonlocality of Bell’s theorem necessarily involves instantaneous changes being produced at space-like separations, in conflict with the “spirit” of special relativity even though these changes are not directly observable. While this mechanism is quite adequate from a purely empirical perspective, the overwhelming experimental success of special relativity (together with the theory’s natural attractiveness), makes one reluctant to abandon it even at a “hidden” level. There are, of course, trade-offs to be made in formulating an alternative model and it is ultimately a matter of taste as to which is preferred. However, constructing an explicit example of a causally symmetric formalism allows the pros and cons of each version to be compared and highlights the consequences of imposing such symmetry. In particular, in addition to providing a natural explanation for Bell nonlocality, the new model allows us to define and work with a mathematical description in 3-dimensional space, rather than configuration space, even in the correlated many particle case.
Causally Symmetric Bohm Model
http://arxiv.org/ftp/quant-ph/papers/0601/0601095.pdf
 
  • #101
DrChinese said:
Their model is not a theory. It is a computer simulation. When I ran it (a particular version), it worked successfully as Kristel indicated it would. I have never completed my analysis of the program, it's one of my long-standing to-do's. :smile: However, the version I ran exploited the fair sampling loophole - which has been long closed.

So my point is: there are no existing local realistic theories on the table to disprove. I am confident because of Bell that none is forthcoming either.

As I said, possibly in another thread, there is a slight subtlety in concluding definitively that locally realistic explanations for the experimental results are not possible. Here are the facts, as I understand them:

  1. Bell proved (to my satisfaction, anyway) that locally realistic models must satisfy a certain inequality for distant correlations. (Or you can substitute the CHSH inequality.)
  2. The predictions of quantum mechanics violate this inequality.
  3. The predictions of quantum mechanics are confirmed by experiment†.

So that sounds pretty conclusive, except for the statement marked with †. What's the hangup? Well, what's actually confirmed is that processed data from experiments confirms the quantum mechanical predictions. The processing involves throwing out data that is erroneous - missed detections, spurious detections, etc. It seems possible to me, and I don't know enough of the details to say conclusively one way or the other, that the processing inadvertently introduces nonlocal correlations. I'm not saying that this does happen, only that you have to look carefully to make sure that it doesn't.
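To illustrate why the processing step matters, here is a deliberately extreme caricature (my own construction, not a model of any real experiment or of any published simulation): a purely local model with setting-dependent detection whose coincidence-post-selected data reach S = 4, beyond both the local bound 2 and the quantum value 2√2. This is exactly the sort of thing the fair-sampling assumption has to exclude.

```python
import numpy as np

rng = np.random.default_rng(1)

# Caricature of the detection/fair-sampling loophole. The shared hidden
# variable (lam_a, lam_b) designates one of the four setting pairs; each
# particle is "detected" only when its local setting matches the designated
# one, and the pre-assigned products are chosen to maximize the post-selected
# CHSH value. Each side uses only its own setting plus the shared variable.
target_product = {(0, 0): +1, (0, 1): -1, (1, 0): +1, (1, 1): +1}

n = 400_000
alpha = rng.integers(0, 2, n)     # Alice's setting choice (a or a')
beta = rng.integers(0, 2, n)      # Bob's setting choice (b or b')
lam_a = rng.integers(0, 2, n)     # hidden variable, Alice's part
lam_b = rng.integers(0, 2, n)     # hidden variable, Bob's part

coincident = (alpha == lam_a) & (beta == lam_b)   # both detectors fired

E = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        keep = coincident & (alpha == i) & (beta == j)
        # Local outcomes: Alice always outputs +1; Bob outputs the product
        # designated by the hidden variable his particle carries.
        products = np.array([target_product[(int(la), int(lb))]
                             for la, lb in zip(lam_a[keep], lam_b[keep])])
        E[i, j] = products.mean()

S = E[0, 0] - E[0, 1] + E[1, 0] + E[1, 1]
print("post-selected CHSH S =", S)                  # 4.0 in this caricature
print("coincidence rate     =", coincident.mean())  # about 0.25
```

Closing the detection loophole amounts to ruling this kind of strategy out, for instance by achieving high enough detection efficiency; the point here is only that one has to look at how the discarded events are handled, not just at the surviving coincidences.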
 
  • #102
stevendaryl said:
As I said, possibly in another thread, there is a slight subtlety in concluding definitively that locally realistic explanations for the experimental results are not possible. Here are the facts, as I understand them:

  1. Bell proved (to my satisfaction, anyway) that locally realistic models must satisfy a certain inequality for distant correlations. (Or you can substitute the CHSH inequality.)
  2. The predictions of quantum mechanics violate this inequality.
  3. The predictions of quantum mechanics are confirmed by experiment†.

So that sounds pretty conclusive, except for the statement marked with †. What's the hangup? Well, what's actually confirmed is that processed data from experiments confirms the quantum mechanical predictions. The processing involves throwing out data that is erroneous - missed detections, spurious detections, etc. It seems possible to me, and I don't know enough of the details to say conclusively one way or the other, that the processing inadvertently introduces nonlocal correlations. I'm not saying that this does happen, only that you have to look carefully to make sure that it doesn't.

All of the loopholes have been closed. No hangup for me at all! :smile:

And it doesn't matter if nonlocal effects appear, that is permissible.
 
  • #103
Anyway, apart from Bell inequalities there are other arguments confirming non-locality:

"Finally, our results demonstrate that one doesn’t need the “big guns” of Bell’s theorem to rule out locality for any theories in which ψ is given ontic status; more straightforward arguments suffice. Bell’s argument is only necessary to rule out locality for ψ-epistemic hidden variable theories."
http://arxiv.org/pdf/0706.2661v1.pdf
http://link.springer.com/article/10.1007/s10701-009-9347-0
 
  • #104
DrChinese said:
All of the loopholes have been closed. No hangup for me at all! :smile:
And it doesn't matter if nonlocal effects appear, that is permissible.

I am with Dr Chinese on this.

All loopholes seem to be closed - local realistic theories are ruled out. Unless someone comes up with a specific counterexample, as de Broglie-Bohm was to von Neumann's proof, I think the evidence is far too overwhelming. And in von Neumann's proof the error was not an error in the theorem per se - you do not expect a mathematician of von Neumann's caliber to make errors in proofs, and he didn't - but in a key assumption he made, namely that for hidden variable theories the observable of a sum must be the sum of the observables (this holds in the mean but not necessarily otherwise). That assumption was glaring if anyone had given it just a bit of thought, and it is a slight mystery why only a few people like Grete Hermann pointed it out. That does not apply here, however: the current theorems have been given a HUGE amount of attention, so much so that I think one can assume they are as watertight as you can reasonably get.
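(A quick way to see the problem with that additivity assumption - a small check of my own, using Pauli matrices: the possible values of σx + σz are ±√2, which are not sums of the individual possible values ±1, even though the expectation values do add.)

```python
import numpy as np

# Pauli matrices (each has eigenvalues +/-1)
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

print(np.linalg.eigvalsh(sx))        # [-1.  1.]
print(np.linalg.eigvalsh(sz))        # [-1.  1.]

# The sum has eigenvalues +/-sqrt(2): NOT sums of the individual +/-1 values.
print(np.linalg.eigvalsh(sx + sz))   # [-1.414...,  1.414...]

# Expectation values DO add, e.g. in the spin-up-along-z state:
psi = np.array([1.0, 0.0])
exp = lambda op: psi @ op @ psi
print(exp(sx) + exp(sz), exp(sx + sz))   # both 1.0
```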

Thanks
Bill
 
  • #105
I guess if QM and space-time are emergent, as several independent thinkers suggest (Nima Arkani-Hamed et al., Gerard 't Hooft, and most recently Lee Smolin et al.), then you could get a deterministic interpretation that isn't non-local *or* local, because space itself isn't fundamental.

It seems this is becoming the unavoidable conclusion for realists. We all know GR and QM are in trouble and the measurement problem has no satisfactory solution. Both may be solved by a deeper underlying theory.

Sure, it seems radical, but no more radical than *any* other interpretation when you really think about it.
 
  • #106
Quantumental said:
I guess if QM and space-time are emergent, as several independent thinkers suggest (Nima Arkani-Hamed et al., Gerard 't Hooft, and most recently Lee Smolin et al.), then you could get a deterministic interpretation that isn't non-local *or* local, because space itself isn't fundamental.

These theorems only apply to QM. They may or may not apply to a theory from which QM emerges, in the way classical mechanics emerges from QM. However, it does seem highly unlikely that a theory from which QM emerges would be locally realistic when QM is not.

Thanks
Bill
 
  • #107
But naturally, as there is no space in these proposals, I don't see the problem.

Realistic causality is still alive.

Think about it in terms of computer code. The code is deterministic and real, but talk of local/nonlocal makes no sense in that context. The space of a simulation run by this computer code isn't the most fundamental thing, and non-local things can occur in it. I am beginning to agree with these intellectual giants more and more. QM just won't yield its own interpretation. Ever. The attempts by Wallace et al. are as "hardcore" as it gets, and they still haven't solved the Born rule or the preferred basis problem.
 
  • #108
Quantumental said:
QM just won't yield its own interpretation. Ever. The attempts by Wallace et al. are as "hardcore" as it gets, and they still haven't solved the Born rule or the preferred basis problem.

You are saying that QM as a science is just fine within any interpretation that is consistent with QM. All of the interpretations are simply a search for a higher truth. That sounds trivial as I write it, but it's an important point, isn't it?
 
  • #109
audioloop said:
Anyway, apart from Bell inequalities there are other arguments confirming non-locality:

"Finally, our results demonstrate that one doesn’t need the “big guns” of Bell’s theorem to rule out locality for any theories in which ψ is given ontic status; more straightforward arguments suffice. Bell’s argument is only necessary to rule out locality for ψ-epistemic hidden variable theories."
http://arxiv.org/pdf/0706.2661v1.pdf
http://link.springer.com/article/10.1007/s10701-009-9347-0

Ehm no, it just means that "local realism" models cannot work if they model quantum states as reality itself instead of our knowledge of reality - that's basically the same issue. :smile:

"we show that for models wherein the quantum state has the status of something real, the failure of locality can be established through an argument considerably more straightforward than Bell’s theorem. [..] the same reasoning is present in Einstein’s preferred argument for incompleteness"
 
  • #110
bhobba said:
I am with Dr Chinese on this.

All loopholes seem to be closed - local realistic theories are ruled out. Unless someone comes up with a specific counterexample, as de Broglie-Bohm was to von Neumann's proof, I think the evidence is far too overwhelming. And in von Neumann's proof the error was not an error in the theorem per se - you do not expect a mathematician of von Neumann's caliber to make errors in proofs, and he didn't - but in a key assumption he made, namely that for hidden variable theories the observable of a sum must be the sum of the observables (this holds in the mean but not necessarily otherwise). That assumption was glaring if anyone had given it just a bit of thought, and it is a slight mystery why only a few people like Grete Hermann pointed it out. That does not apply here, however: the current theorems have been given a HUGE amount of attention, so much so that I think one can assume they are as watertight as you can reasonably get.

Thanks
Bill
A theorem of physics necessarily includes physical assumptions about the validity of applying the mathematics. There are a number of professors in the field who have published where exactly the error(s) are, according to them. Those publications are far fewer in number than publications that accept the theorem, which is in part due to publication bias - for example, yesterday I learned that one such paper, which I found on arXiv and of which I would have liked to discuss a reviewed version here, wasn't accepted by Physical Review Letters because it lacked novelty! In other words, if a few papers that appear to disprove Bell's theorem have already been published, then according to those editors that should be enough for their readers (you).
 
  • #111
harrylin said:
There are a number of professors in the field who have published where exactly the error(s) are, according to them. Those publications are far fewer in number than publications that accept the theorem, which is in part due to publication bias...

Yeah, there's a bias in scientific journals against bad papers. By far, the majority of the anti-Bell papers are mistaken. I think they all are, but I haven't read all of them.
 
  • #112
stevendaryl said:
Yeah, there's a bias in scientific journals against bad papers. By far, the majority of the anti-Bell papers are mistaken. I think they all are, but I haven't read all of them.
That may certainly be part of the issue. However, your comment on my words doesn't match what I wrote immediately after the part that you cited: are you suggesting that those editors are dishonest, or did you misunderstand my explanation to Bill?
 
  • #113
harrylin said:
A theorem of physics necessarily includes physical assumptions about the validity of applying the mathematics. There are a number of professors in the field who have published where exactly the error(s) are, according to them. Those publications are far fewer in number than publications that accept the theorem, which is in part due to publication bias - for example, yesterday I learned that one such paper, which I found on arXiv and of which I would have liked to discuss a reviewed version here, wasn't accepted by Physical Review Letters because it lacked novelty! In other words, if a few papers that appear to disprove Bell's theorem have already been published, then according to those editors that should be enough for their readers (you).

"Lack of novelty" usually means that there is not enough new ideas in the paper. If, for example, the paper is largelly based on ideas that have been published elsewhere then it won't get published in PRL. Nor will it get published if it is too similar to work that has been already been published by others.
The "novelty" criteria is much stricter in PRL than in most other journals (e.g. the other Physical Review journals will accept manuscripts that are "only" extentions or improvements of previously published results).

Hence, in order for a paper of this type to be accepted by PRL it would - even if it is correct - have to be based on a completely new approach, which is somewhat unlikely considering how long Bell's theorem has been around.
 
  • #114
harrylin said:
In other words, if a few papers that appear to disprove Bell's theorem have already been published, then according to those editors that should be enough for their readers (you).

Yeah, I get it. But what we really need, if Bell's theorem is to be attacked, is a counterexample, like de Broglie-Bohm was to von Neumann. Do you know of any? Without that I have a hard time believing there is an issue. And we also have Gleason's theorem - it needs to evade that as well and be explicitly contextual. A tough ask.

Thanks
Bill
 
  • #115
f95toli said:
"Lack of novelty" usually means that there is not enough new ideas in the paper. If, for example, the paper is largelly based on ideas that have been published elsewhere then it won't get published in PRL. Nor will it get published if it is too similar to work that has been already been published by others.
The "novelty" criteria is much stricter in PRL than in most other journals (e.g. the other Physical Review journals will accept manuscripts that are "only" extentions or improvements of previously published results).

Hence, in order for a paper of this type to be accepted by PRL it would - even if it is correct - have to be based on a completely new approach, which is somewhat unlikely considering how long Bell's theorem has been around.
That's certainly correct, although I have the impression that quite a few similar papers in support of Bell's interpretation have been published. And the same originality argument is used by many other journals.
As it appears that I was not clear enough, here it is once more: because of selection criteria that have nothing to do with the validity of the contents, the number of papers isn't a reliable measure of the validity of the arguments they contain.
 
  • #116
bhobba said:
Yeah, I get it. But what we really need, if Bell's theorem is to be attacked, is a counterexample, like de Broglie-Bohm was to von Neumann. Do you know of any? Without that I have a hard time believing there is an issue. And we also have Gleason's theorem - it needs to evade that as well and be explicitly contextual. A tough ask.

Thanks
Bill
I'm looking into two approaches. One is the Accardi approach already mentioned in this thread: try to disqualify the derivation with a counter-derivation (now, did he or did he not?).

The other approach is to come up with a simulation that does what is impossible according to Bell or Herbert; I'm now examining a simulation that claims to do just that. I'll present it here if it looks serious to me. As Nick Herbert's variant of Bell's theorem is the clearest of all, I'll try to modify the simulation to exactly reflect Herbert's presentation.
 
  • #117
harrylin said:
The other approach is to come up with a simulation that does what is impossible according to Bell or Herbert; ...

The impact of Bell is that it tells us: "No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."

You must have tried examples and seen for yourself WHY a simulation can't really do that. There are now hundreds if not thousands of Bell Inequalities to be considered as well! So coming up with a single simulation would not even come close to pointing the way to a theory. Don't confuse a simulation with a theory!

And no one can even hand pick a single example set that recreates the QM predictions without breaking the observer independence requirement of EPR - for even the most basic Bell Inequality.

Disproving Bell does not mean you find some error in his assumptions, definitions, logic, etc. It means showing us a physical theory of local hidden variables. If you want to know why papers are routinely rejected "disproving" Bell, it is because they completely ignore this element. It is a tough standard, but such is the scientific method.

The experimentalists developing new and more sophisticated entanglement experiments must be scratching their heads each time a new paper "proves" entanglement is an illusion (ie a spurious correlation).
 
  • #118
Quantumental said:
I guess if QM and space-time are emergent, as several independent thinkers suggest (Nima Arkani-Hamed et al., Gerard 't Hooft, and most recently Lee Smolin et al.), then you could get a deterministic interpretation that isn't non-local *or* local, because space itself isn't fundamental.
I find the idea of spacetime emerging from a non-spatiotemporal structure (e.g. causal sets, etc.) interesting, but there are authors who argue that the idea of an emergent spacetime is incoherent:
While dynamical approaches to spacetime give priority to the dynamics over the geometry, it is not always clear in what sense they do so...Another, stronger sense of this idea that seems to be associated with dynamical approaches, is that not only the symmetries of the geometry but also the very concept of spacetime as a geometric structure can be fully derived from the dynamics without assuming geometric notions such as length or volume. We will argue that this stronger sense of deriving the geometry from the dynamics is untenable. At best one can show that the dynamics can be given a geometric interpretation. This, however, in our view does not amount to a strict derivation but rather to a sort of a consistency proof. Consistency proofs, however, are a far cry from "derivations", and so in this stronger sense it seems to us misleading to think about the geometry as "emerging" from the dynamics alone.
The primacy of Geometry
http://mypage.iu.edu/~hagara/Geom.pdf

But I'm not sure I buy this argument. In some sense, I tend to agree with Gisin that non-locality itself may be suggesting to us that the universe, at bottom, might not be "in" space and time. Other authors suggest the same thing:
that in some way spacetime as we find it in our existing theories is not a fundamental ingredient of the world, but instead, like rainbows, plants or people, `emerges' from some deeper, non-spatiotemporal physics.
The emergence of spacetime in quantum theories of gravity
http://philsci-archive.pitt.edu/9928/1/HuggettWuthrichIntro_06.pdf

I'm still confused, however, about the relationship between non-locality and spatio-temporality. Does non-locality itself imply non-spatio-temporality (as some like Gisin seem to imply) or does non-spatiotemporality preclude non-locality (that is, locality/non-locality make sense only if one assumes spatio-temporality)?
 
  • #119
Quantumental said:
I guess if QM and space-time are emergent, as several independent thinkers suggest (Nima Arkani-Hamed et al., Gerard 't Hooft, and most recently Lee Smolin et al.), then you could get a deterministic interpretation that isn't non-local *or* local, because space itself isn't fundamental.

It seems this is becoming the unavoidable conclusion for realists. We all know GR and QM are in trouble and the measurement problem has no satisfactory solution. Both may be solved by a deeper underlying theory.

Sure, it seems radical, but no more radical than *any* other interpretation when you really think about it.

As Gisin said, order comes from outside space-time.
 
  • #120
harrylin said:
Ehm no, it just means that "local realism" models cannot work if they model quantum states as reality itself instead of our knowledge of reality - that's basically the same issue. :smile:

"we show that for models wherein the quantum state has the status of something real, the failure of locality can be established through an argument considerably more straightforward than Bell’s theorem. [..] the same reasoning is present in Einstein’s preferred argument for incompleteness"

Thanks for repeating the post; consequently, pray for an epistemic view...
 
