Copenhagen: Restriction on knowledge or restriction on ontology?

In summary: if they're genuinely random then they can't be observed, so they must exist in some sense outside of observation.
  • #246
Ontology. If we can't measure something in principle, then neither can nature.
 
  • #247
DarMM said:
A short article from Griffiths:
https://arxiv.org/abs/1901.07050

I don't think that Griffiths' argument in section 2.2 is at all a fair rebuttal to the argument that EPR-type experiments imply nonlocality. The recipe for working with quantum mechanics is:
  1. When you make a measurement, it gives an eigenvalue of the observable being measured with probabilities given by the Born rule.
  2. Afterward, the state will have "collapsed" to an eigenstate of the observable corresponding to the measured eigenvalue.
In an EPR-type experiment, the state is distributed over a spacelike region, so if this collapse is taken literally, then it would seem to imply a nonlocal process. The measurement here causes an instantaneous change there.
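
To make the recipe concrete, here is a minimal sketch of both rules applied to a spin singlet (my own illustration, not part of the original post; the state, projectors, and variable names are my choices):

```python
# Minimal sketch (not from the post): Born rule + collapse for a spin
# singlet, basis order |00>, |01>, |10>, |11>, first slot = particle A.
import numpy as np

psi = np.array([0, 1, -1, 0]) / np.sqrt(2)   # singlet (|01> - |10>)/sqrt(2)

# Projectors for measuring particle A along z
P0 = np.kron(np.diag([1, 0]), np.eye(2))     # outcome "up" on A
P1 = np.kron(np.diag([0, 1]), np.eye(2))     # outcome "down" on A

# Rule 1 (Born rule): outcome probabilities
p0 = np.vdot(psi, P0 @ psi).real
p1 = np.vdot(psi, P1 @ psi).real
print(p0, p1)                                # 0.5 0.5

# Rule 2 (collapse): renormalized post-measurement state for outcome "up"
post = P0 @ psi / np.sqrt(p0)
print(post)                                  # [0, 1, 0, 0]: B is now "down"
```

The last line is the collapse in question: the joint state, distributed over a spacelike region, is updated in a single step.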

E, P, and R were arguing that this DOESN'T happen; that the measurement here doesn't affect the measurement over there. The measurement over there is only affected by local conditions over there (the hidden variables).

Bell's inequality shows that hidden variables of the sort that E, P, and R had in mind can't explain the nonlocal correlations in an EPR-type experiment. So there isn't a realistic alternative to the "collapse" model, which is nonlocal.

Griffiths' argument in section 2.2 seems to be completely missing the point. Yes, a collapse interpretation doesn't violate locality if you don't have a distributed state. But that was the whole point of the EPR argument---that the "collapse" interpretation seems to imply nonlocality if you have a distributed state.

I'm very unimpressed.
 
  • #248
Demystifier said:
If the point of interpretations is not to make measurable predictions (because we already have the unambiguous quantum formalism for that), then what is the point of an interpretation that is not intuitive?

I really appreciate Demystifier's persistent questioning of our understanding of the foundations of QM, even if the Bohmian perspective is superficially opposite to my stance. I have started to see common junctions in the perspectives; it's just that sometimes the choice of words makes this sound very different.

If I may add another comparison/abstraction here on the notion of an ontology being intuitive.

As per past discussions, there exists a common confusion/misunderstanding about the extent to which quantum predictions are "subjective" or "relative to a classical measurement device". That has led people to confuse this subjectivity with HUMAN subjectivity, or the idea that human science may be subjective, leading further to bringing the notion of the human brain into the foundations of QM. This naturally leads many physicists to react strongly against such mumbo jumbo.

Similarly, one can wonder what sense there is in bringing in a concept of human intuition to rate physical ontologies/theories. As I enjoy Demystifier's questions, I do not attach so much attention to the choice of words; I rather see an abstraction and analogy to the information-processing-agent perspective that is MY intuitive picture, and the interesting thing is that they may relate the way subjectivity and relativity relate.

In my previous comment I compared the ontology to a retained compressed code that REPRESENTS the expectation model for future processes. In this perspective, the ontology must be "natural" in the sense that "computations" are spontaneous and easy. I.e., if we consider the observer's "calculation of expectations" according to some "mechanics of his ontological structure", in my view this calculation must be a spontaneous process. And of course I would have no problem calling such a property "intuitive". Intuitive means that it's something that can be executed with minimal effort, ideally spontaneously.

In this way, the notion of intuition can be understood without bringing in human concepts, neuroscience or psychology. It is simply an adjective that says that the "inference" from the ontology is natural. This is exactly how I see it as well, and I think it's important; rephrasing it like this might make others understand too, without being put off by the choice of words.

As long as the foundational questions remain open research questions, we cannot escape words. We also cannot escape mathematics, of course, but we need both. And ideas need both to be explained.

My only final comment is that I would say that this abstracted notion of "intuitive" is still subjective, but not subjective as in human-to-human subjective; it's rather subjective relative to the computing agent (think matter). This can potentially be cast in terms of foundational physics, but given the circularity here I again suggest we need to think in terms of evolution. The problems always start, I think, when you expect a static, eternal starting point from which to construct everything, ignoring that the constructing process itself needs structure for guidance. It's like matter and geometry, but here we are talking about ontology and epistemology. The evolved ontology is necessarily natural by definition. But even naturality has variation.

/Fredrik
 
  • #249
stevendaryl said:
I don't think that Griffiths' argument in section 2.2 is at all a fair rebuttal to the argument that EPR-type experiments imply nonlocality
The original EPR argument doesn't imply nonlocality because it has a local model, right?

stevendaryl said:
The measurement here causes an instantaneous change there
I think this is trivially avoided in Bohr and Neo-Copenhagen with the wavefunction being epistemic. The whole EPR argument can be repeated in Spekkens' toy model or many other local hidden variable models. They similarly have a collapse, but nothing is nonlocal.

It's the CHSH inequalities that have a true implication, not EPR.

stevendaryl said:
Griffiths' argument in section 2.2 seems to be completely missing the point. Yes, a collapse interpretation doesn't violate locality if you don't have a distributed state
I think his argument is more in section 5.2: that nonlocality arises from presuming a common sample space/realism for the observables. It's an alternate way out of nonlocality.

Of course whether one likes the "price" of what the lack of a common sample space implies is the issue, i.e. the absence of conventional realism.
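
For reference, a small sketch of what the CHSH inequalities assert (my own addition, not DarMM's): the singlet correlations E(a,b) = -cos(a-b) reach 2√2 at the standard angle choices, past the bound of 2 that any local hidden variable model must obey.

```python
# Sketch (my addition): CHSH value of the singlet vs. the classical bound 2.
import numpy as np

def E(a, b):
    """Quantum correlation for spin measurements on a singlet at angles a, b."""
    return -np.cos(a - b)

a, ap = 0.0, np.pi / 2            # Alice's two settings
b, bp = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))        # 2.828... = 2*sqrt(2)
print(abs(S) <= 2)   # False: the local hidden variable bound is violated
```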
 
  • #250
DarMM said:
The original EPR argument doesn't imply nonlocality because it has a local model, right?

The original EPR argument was trying to show that QM doesn't violate locality. It failed because of Bell's argument.

I think this is trivially avoided in Bohr and Neo-Copenhagen with the wavefunction being epistemic.

No, being epistemic doesn't accomplish anything. Saying that ##\psi## is epistemic is to say that it isn't the reality, it's only a reflection of our information about that reality. So the issue then becomes: what is the reality, and is it nonlocal?

My point is that Griffiths is misunderstanding the EPR argument if he thinks that a rebuttal consists of showing that violations of Bell's inequality can be achieved in situations that are intrinsically local. That's got things backwards. The Bell argument isn't that nonlocality is required to violate his inequality; there is no problem coming up with a causal model that violates it locally. The difficulty is coming up with a causal model that violates it for spacelike separated observables.
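
To illustrate that last point with a toy example (mine, not stevendaryl's): a purely classical model in which one device is allowed to see both settings, as it could be if the two measurement events were timelike separated, reproduces the singlet statistics and violates CHSH with no mystery at all.

```python
# Toy sketch (my own): a classical model that fakes singlet statistics by
# letting B's device see A's setting; easy for timelike separated events,
# impossible for spacelike separated ones without nonlocality.
import numpy as np

rng = np.random.default_rng(0)

def communicating_model(a, b):
    """Returns (A, B) in {+1,-1}^2 with E(a,b) = -cos(a-b); uses BOTH settings."""
    p_same = np.sin((a - b) / 2) ** 2   # the "cheating" step: depends on a AND b
    A = rng.choice([+1, -1])
    B = A if rng.random() < p_same else -A
    return A, B

settings = [(0, np.pi / 4), (0, 3 * np.pi / 4),
            (np.pi / 2, np.pi / 4), (np.pi / 2, 3 * np.pi / 4)]
E = []
for x, y in settings:
    outcomes = [communicating_model(x, y) for _ in range(20000)]
    E.append(np.mean([A * B for A, B in outcomes]))

S = E[0] - E[1] + E[2] + E[3]
print(abs(S))   # ~2.83 > 2: CHSH "violated", but only via communication
```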
 
  • #251
Just to be clear, what ultimately is your argument in this thread? Is it that, contrary to conventional wisdom, giving up conventional realism doesn't save locality, and that QBism and Copenhagen are actually nonlocal?

I'm also still confused by this notion that contextuality is just "mathematical" and of no real foundational relevance. How do you square such a view with all of its consequences in Quantum Information alone?
 
  • #252
DarMM said:
Just to be clear, what ultimately is your argument in this thread? Is it that, contrary to conventional wisdom, giving up conventional realism doesn't save locality, and that QBism and Copenhagen are actually nonlocal?

I was specifically responding to the argument by Griffiths' in the link you posted. It seems to be based on a misunderstanding of the EPR and Bell arguments.

As far as what I personally think, it seems to me that there are only three possibilities that make any sense: (1) something nonlocal is going on, or (2) QM is wrong (or at least incomplete), or (3) something like Many-Worlds is true (in spite of what appears to be the case, multiple macroscopically different versions of the world can exist simultaneously).
 
  • #253
stevendaryl said:
As far as what I personally think, it seems to me that there are only three possibilities that make any sense: (1) something nonlocal is going on, or (2) QM is wrong (or at least incomplete), or (3) something like Many-Worlds is true (in spite of what appears to be the case, multiple macroscopically different versions of the world can exist simultaneously)
For what reason do you exclude the other ways out of Bell's theorem?
 
  • #254
DarMM said:
For what reason do you exclude the other ways out of Bell's theorem?

I can't make any sense of them.
 
  • #255
stevendaryl said:
I can't make any sense of them.
That seems strange to me, as the retrocausal and acausal approaches do manage to replicate a good deal of QM, whereas MWI can't get out the Born rule and so replicates no predictions at all. Ruth Kastner has even managed to get out some of QED. Not that I'm convinced by them either, since there are many calculations they haven't replicated as of 2019.
 
  • #256
Actually, in my opinion, there is a huge misunderstanding in quantum mechanics today. I cannot see the difference between De Broglie's formula and Heisenberg's uncertainty principle. If someone understands that, then your question is answered!
 
  • #257
DarMM said:
That seems strange to me, as the retrocausal and acausal approaches do manage to replicate a good deal of QM

I lump those in with being nonlocal. If you have effects traveling both forward and backward in time, then spacelike separations are no impediment. A spacelike separation is a combination of two timelike separations, if you allow both directions in time.
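
A quick numeric check of that claim (my own illustration, with arbitrary event coordinates and units where c = 1): two spacelike separated events can be joined by two timelike legs, one running backward in time through a common past event.

```python
# Sketch (mine): a spacelike pair A-B decomposed into two timelike legs
# A->M (backward in time) and M->B (forward in time), units with c = 1.
def interval(e1, e2):
    """Minkowski interval s^2 = dt^2 - dx^2: > 0 timelike, < 0 spacelike."""
    dt, dx = e2[0] - e1[0], e2[1] - e1[1]
    return dt**2 - dx**2

A = (0.0, 0.0)     # event (t, x)
B = (0.0, 10.0)    # same time, far away: spacelike from A
M = (-6.0, 5.0)    # event in the common past of A and B

print(interval(A, B))   # -100.0: spacelike
print(interval(A, M))   #   11.0: timelike leg, backward in time from A
print(interval(M, B))   #   11.0: timelike leg, forward in time to B
```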

whereas MWI can't get out the Born rule and so replicates no predictions at all.

I agree that MWI has unresolved problems, but they are actually problems with QM in general; they just show up more explicitly in MWI.

When I say "Many-Worlds" I mean, in general, theories in which measurement results and macroscopic variables don't have definite values, in which you can have superpositions of macroscopically distinguishable states. To me, this is just a consequence of QM. There is nothing in QM that limits the size of the system to which it applies. So it's actually inconsistent (what I've called a "soft inconsistency") to assume QM applies to everything and also that measurements have definite outcomes. So something like Many-Worlds is, to me, implied by QM.

But I certainly appreciate the problems with that: If measurements don't have definite outcomes, then it's tough to make sense of Born's rule.
 
  • #258
stevendaryl said:
I lump those in with being nonlocal. If you have effects traveling both forward and backward in time, then spacelike separations are no impediment. A spacelike separation is a combination of two timelike separations, if you allow both directions in time.
I can see why you'd think that, but I don't think it is valid. The retrocausal views have no spacelike propagation, and in the acausal views no propagation at all is involved in the violation of the CHSH inequalities.

When I say "Many-Worlds" I mean, in general, theories in which measurement results and macroscopic variables don't have definite values, in which you can have superpositions of macroscopically distinguishable states. To me, this is just a consequence of QM. There is nothing in QM that limits the size of the system to which it applies. So it's actually inconsistent (what I've called a "soft inconsistency") to assume QM applies to everything and also that measurements have definite outcomes.
However, despite people attempting to prove this is a contradiction, nobody has produced a mathematical theorem to that effect; that was the purpose of Frauchiger-Renner and its spin-offs.

But I certainly appreciate the problems with that: If measurements don't have definite outcomes, then it's tough to make sense of Born's rule.
The Born rule is needed to derive the macroscopic world being classical at all, so the problems go quite deep.
 
  • #259
DarMM said:
I can see why you'd think that, but I don't think it is valid. The retrocausal views have no spacelike propagation, and in the acausal views no propagation at all is involved in the violation of the CHSH inequalities.

Well, for the sake of categorization, there are two rough categories: (1) normal causality, in which the future is affected by the past lightcone, and (2) everything else.

However, despite people attempting to prove this is a contradiction, nobody has produced a mathematical theorem to that effect; that was the purpose of Frauchiger-Renner and its spin-offs.

The foundations of QM are too fuzzy to derive a tight contradiction. That isn't a plus, in my opinion.

The Born rule is needed to derive the macroscopic world being classical at all, so the problems go quite deep.

Yes, the problems are very deep, and nobody has a clue. Except maybe the Bohmians.
 
  • #260
stevendaryl said:
The foundations of QM are too fuzzy to derive a tight contradiction. That isn't a plus, in my opinion
The Frauchiger-Renner arguments don't fail for fuzzy reasons though, but due to specific mathematical properties of the theory like intervention sensitivity.

stevendaryl said:
Well, for the sake of categorization, there are two rough categories: (1) normal causality, in which the future is affected by the past lightcone, and (2) everything else.
Certainly, but the acausal views don't have nonlocal interactions or nonlocal degrees of freedom so they simply are not nonlocal.
 
  • #261
atyy said:
I've read this now and a few of its references. It derives the same kind of trade-off we've been discussing here, but in the language of causal networks. You recover locality, but at the cost of embedding the observer/the notion of an agent into the theory.

So it's a choice between Nonlocality and Participatory realism as we had above.

It can be extended to include the other views with the alternate causal graphs that replicate the Bell inequalities as Pusey and Leifer do in their retrocausality paper (https://arxiv.org/abs/1607.07871).

So you can derive the same kind of result with causal graphs, multiple sample spaces, counterfactual indefiniteness, etc.; there are a few different ways of doing it.
 
  • #262
stevendaryl said:
My point is that Griffiths is misunderstanding the EPR argument if he thinks that a rebuttal consists of showing that violations of Bell's inequality can be achieved in situations that are intrinsically local. That's got things backwards. The Bell argument isn't that nonlocality is required to violate his inequality; there is no problem coming up with a causal model that violates it locally. The difficulty is coming up with a causal model that violates it for spacelike separated observables.
I think Griffiths' argument is a bit more than this in light of his Section 5.

What kind of causal models violate them locally?

I think this relates back to contextuality/counterfactual indefiniteness. By the Kochen-Specker theorem, a hidden variable theory has to be contextual. This removes the agential notions from the theory, but at the so-far minor cost of contextuality.

However, when you have spacelike separation, the context is extended over a large region of spacetime, and thus to "act in accordance with the context" the theory has to be nonlocal. Aravind gives a good example of the link between contextuality and nonlocality in https://arxiv.org/abs/quant-ph/0701031. So Bell's theorem is sort of the spacelike version of the timelike Kochen-Specker theorem. Cabello discusses this at the start of his paper: https://arxiv.org/abs/1801.06347
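
As a concrete instance (my own worked example, not from the post), the Mermin-Peres "magic square" of two-qubit Pauli observables exhibits this operator-level contextuality: each row multiplies to +I and each column to +I except the last, which multiplies to -I, so no noncontextual assignment of fixed ±1 values to the nine observables can satisfy all six product constraints.

```python
# Sketch (my addition): verify the Mermin-Peres magic square constraints.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

square = [
    [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(X, Z), np.kron(Z, X), np.kron(Y, Y)],
]

for i, row in enumerate(square):
    prod = row[0] @ row[1] @ row[2]
    print(f"row {i}: +I? {np.allclose(prod, np.eye(4))}")   # True, True, True
for j in range(3):
    prod = square[0][j] @ square[1][j] @ square[2][j]
    sign = "+I" if np.allclose(prod, np.eye(4)) else (
        "-I" if np.allclose(prod, -np.eye(4)) else "?")
    print(f"col {j}: product = {sign}")                     # +I, +I, -I
```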

However all this happens only if you try to introduce the hidden variables. If you just swallow Participatory Realism there is no such context dependence and thus you don't end up with nonlocality.

Of course Participatory Realism is troubling for different reasons.
 
  • #263
Demystifier said:
The so called "Copenhagen" interpretation of QM, known also as "standard" or "orthodox" interpretation, which is really a wide class of related but different interpretations, is often formulated as a statement that some things cannot be known. For instance, one cannot know both position and momentum of the particle at the same time.
.
.
.
But on other hand, it is also not rare that one formulates such an interpretation as a statement that some things don't exist. For instance, position and momentum of the particle don't exist at the same time.

Which of those two formulations better describes the spirit of Copenhagen/standard/orthodox interpretations?
What you have described here is not Copenhagen. What you have written is very much closer to the notion of Complementarity (as promoted by Wilczek, et al). A little more about this below, but to circle back to Copenhagen briefly,

Copenhagen consists of three main pillars.
1. There are things in the world called "observers" (which correspond to our intuitive notion of people/scientists/grad students).
2. There are events in the world called "measurements" (which correspond with our intuitive notion of measuring a system.)
3. It does not make sense to ask about a particle's position prior to measurement. In another thread you quoted Bohr's remark that QM is a theory about what is measured, not what is "out there" in the world.

Copenhagen was satisfactory in the early years of the theory, since no serious scientist was going to demand a formal definition of a human observer, or ask pathological questions about "measurement". Eventually, everyone did both things, giving rise to a pantheon of various modern interpretations: "observers" are also made of particles, and "measurement" is reformulated as some kind of information copying.

Contemporary notions of Complementarity use the verb "to know" for a specific reason. Any information about the system's state could be stored somewhere independent of the act of measurement (say, in the RAM of a computer). It appears, for all intents, that the mere possibility of an observer being able to retrieve this information is enough, by itself, to both remove interference and destroy entanglement. The situation would be far more palatable if the act of measuring were the skeleton key that destroys superposition. Complementarity says it is far more subtle: the mere possibility of that information leaking into the environment will do the trick. The universe conspires to prevent you from knowing.
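
A small sketch of that claim (my own toy illustration): copy the which-path bit into any second system, a RAM cell will do, and the system's interference terms vanish, whether or not anyone ever reads the copy.

```python
# Sketch (mine): interference visibility with and without a which-path copy.
import numpy as np

ket0, ket1 = np.array([1, 0]), np.array([0, 1])
plus = (ket0 + ket1) / np.sqrt(2)            # superposition of the two paths

def visibility(path_copied):
    if path_copied:
        # Which-path bit copied to a second qubit: (|0>|0> + |1>|1>)/sqrt(2)
        psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
        rho = np.outer(psi, psi.conj())
        # Reduced state of the system alone: trace out the copy
        rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    else:
        rho_sys = np.outer(plus, plus.conj())
    return abs(rho_sys[0, 1])                # off-diagonal = interference term

print(visibility(False))  # 0.5: full interference
print(visibility(True))   # 0.0: gone, though nobody "looked" at the copy
```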

Demystifier said:
To be sure, adherents of such interpretations often say that those restrictions refer to knowledge, without saying explicitly that those restrictions refer also to existence (ontology).
Actually, Quantum Bayesianists would explicitly state that the restrictions do not apply to ontology. (I'm not an advocate myself.) But they would say that any and all of these restrictions derive from the observer's knowledge.

Demystifier said:
Moreover, some of them say explicitly that things do exist even when we don't know it. But in my opinion, those who say so are often inconsistent with other things they say. In particular, they typically say that Nature is local despite the Bell theorem, which is inconsistent. It is inconsistent because the Bell theorem says that if something (ontology, reality, or whatever one calls it) exists, then this thing that exists obeys non-local laws. So one cannot avoid non-locality by saying that something is not known. Non-locality implied by the Bell theorem can only be avoided by assuming that something doesn't exist. Hence any version of Copenhagen/standard/orthodox interpretation that insists that Nature is local must insist that this interpretation puts a severe restriction on the existence of something, and not merely on the possibility to know something.
I am not sure you have characterized Bell's Theorem correctly here, but I wanted to say something else.

Modern physics separated itself from classical notions of ontology around the years of the EPR debates. We have, as a civilization, tools called QM and QFT, and those tools, standing innocently on their own, cannot identify what Einstein called objective "elements of reality". If the position of a particle is not an element-of-reality, then what is? Perhaps the objective element-of-reality we seek is the Quantum State (PBR Theorem). Or perhaps it is the Wave Function as some kind of extended wave (as advocates of Many Worlds claim).

You will undoubtedly detect what you call "inconsistencies" in what people write about these topics. English has limits and it will fray at the edges. While the formal tools cannot identify elements of reality, the people who use them definitely think they can. These inconsistencies are unlikely to go away soon.
 
  • #264
Demystifier said:
If the point of interpretations is not to make measurable predictions (because we already have the unambiguous quantum formalism for that), then what is the point of an interpretation that is not intuitive?
The point of a non-intuitive interpretation is to change one's intuition to match the physics. The ether was an intuitive interpretation, but there was no physics in an ether theory, and the intuition it imparted leads to incorrect physics without adding more baggage to eliminate it. Similarly, in quantum mechanics, a limitation on what nature can specify contains real physics. Trying to evade it (e.g., Bohmian mechanics) should lead to different physical predictions. If it does not, then it's only intuitive to the extent that the intuition it provides is wrong and actually eliminates the real physics. The only difference between an intuitive interpretation and a non-intuitive one is that sometimes intuition must follow from accepting new physics that has no counterpart in everyday experience to analogize from.
 
  • #265
hyksos said:
Perhaps the objective element-of-reality we seek is the Quantum State (PBR Theorem)
Just to expand on this: in my understanding, the PBR theorem says that if you want a theory obeying the ontological models framework's axioms to match quantum mechanical predictions while satisfying preparation independence, then the state space of the theory has to include the wavefunction.

So one can either accept the wavefunction as real or reject the ontological models framework's axioms.
 
  • #266
stevendaryl said:
Well, for the sake of categorization, there are two rough categories: (1) normal causality, in which the future is affected by the past lightcone, and (2) everything else.
To me, non-local is not the same category as retrocausal, precisely because the former need have no causality outside the past light cone (nor back in time influences), while retrocausal explicitly does.

Consider the events E1: preparation of entangled particles; E2: choice of a measurement of particle 1, and its result; E3: choice of a measurement of particle 2, and its result; E4: observation of the results of E2 and E3.

The physics at E2 is wholly unaffected by E3, even in principle, and similarly for E3 vis-à-vis E2.

Nonlocality only enters into observations (physics) at E4, by which point E1, E2, and E3 are all in its past light cone. The nonlocal aspect is simply that modeling the correlations observable at a sequence of E4s involves both E2 and E3 information, and that model cannot treat E2 and E3 as independent (nor as determined by the state at E1). Note that immediately after E2 you cannot predict what will be seen at E4, because you have no idea what measurement will be taken at E3.

[edit: that is, the correlations seen at E4 are caused by preparation at E1, choice of measurement at E2, and choice of measurement at E3. No part of this is outside normal SR causal structure.]
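
A short sketch of how I read this point (my own code; the singlet statistics are standard): the marginal statistics at E2 are flat regardless of the setting chosen at E3, and only the joint record assembled at E4 exhibits the correlation.

```python
# Sketch (mine): no-signalling marginals vs. the correlated joint record at E4.
import numpy as np

def joint(a, b):
    """Singlet joint distribution over outcomes (A, B) in {+1,-1}^2."""
    same = np.sin((a - b) / 2) ** 2          # P(outcomes agree)
    return {(+1, +1): same / 2, (-1, -1): same / 2,
            (+1, -1): (1 - same) / 2, (-1, +1): (1 - same) / 2}

a = 0.0                                      # setting chosen at E2
for b in (0.0, np.pi / 3, np.pi / 2):        # settings chosen at E3
    p = joint(a, b)
    marginal_up = p[(+1, +1)] + p[(+1, -1)]  # all that E2 alone can see
    corr = sum(A * B * q for (A, B), q in p.items())
    print(f"b = {b:.2f}: P(A=+1) = {marginal_up:.2f}, E(a,b) = {corr:+.2f}")
# P(A=+1) is 0.50 for every b; only the E4 correlation depends on both settings.
```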
 
  • #267
Demystifier said:
What evidence do we have for the claim that the real stuff cannot be described mathematically?
How would anybody be able to determine this one way or the other?
 
  • #268
PAllen said:
To me, non-local is not the same category as retrocausal, precisely because the former need have no causality outside the past light cone (nor back in time influences), while retrocausal explicitly does.

Unfortunately, that reasoning is flawed. If events are spacelike separated, then a Lorentz transformation can make them simultaneous or make either event occur before the other. So, if you assume one event causes the other, you can make a Lorentz transform to a frame in which that is false, which requires the event you assumed was the cause to act retrocausally. That is what non-local means.
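
That frame dependence is easy to exhibit numerically (a sketch of my own, with arbitrary coordinates and units where c = 1): the same spacelike pair is ordered either way, or simultaneous, depending on the boost.

```python
# Sketch (mine): the time order of spacelike separated events is frame
# dependent under Lorentz boosts, units with c = 1.
import numpy as np

def boosted_dt(dt, dx, v):
    """Time separation between two events in a frame moving at velocity v."""
    gamma = 1 / np.sqrt(1 - v**2)
    return gamma * (dt - v * dx)

dt, dx = 1.0, 5.0          # spacelike pair: |dx| > |dt|
for v in (-0.5, 0.2, 0.5):
    print(f"v = {v:+.1f}: dt' = {boosted_dt(dt, dx, v):+.3f}")
# v = -0.5: dt' > 0 (second event later); v = +0.2: dt' = 0 (simultaneous);
# v = +0.5: dt' < 0 (second event earlier)
```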
 
  • #269
bobob said:
Unfortunately, that reasoning is flawed. If events are spacelike separated, then a Lorentz transformation can make them simultaneous or make either event occur before the other. So, if you assume one event causes the other, you can make a Lorentz transform to a frame in which that is false, which requires the event you assumed was the cause to act retrocausally. That is what non-local means.
This isn't true if retrocausal influences occur only within the light cone.

And the same certainly holds in acausal theories, for which there is no physical propagation at all, only a constraint on the 4D history.
 
  • #270
bohm2 said:
How would anybody be able to determine this one way or the other?
The argument is essentially the failure of hidden variable theories and how fine-tuned they have to be. However, like most interpretative arguments, it's not definitive.
 
  • #271
DarMM said:
This isn't true if retrocausal influences occur only within the light cone.

(1) Huh? If you have spacelike separated events, the events cannot be time ordered. Therefore, the time ordering is frame dependent, and in particular the events can be made simultaneous. Neither can be the cause of the other. Neither is in the other's light cone.

(2) Events which are within each other's light cone are, by definition, time ordered. If there is a cause and effect, which is the cause and which is the effect are defined by their time ordering. Retrocausal in this case would mean a closed timelike world line, since all of the world lines connecting causes and effects are timelike.

To state otherwise is to flat out say relativity is incorrect.
 
  • #272
DarMM said:
The argument is essentially the failure of hidden variable theories and how fine-tuned they have to be. However, like most interpretative arguments, it's not definitive.

Hidden variables must be in some sense "fine tuned" as proved by Wood and Spekkens. However, they also argue that some forms of fine tuning are more acceptable than others.

https://arxiv.org/abs/1208.4119

"Valentini's version of the deBroglie-Bohm interpretation makes this fact particularly clear. In Refs. [24, 25] he has noted that the wavefunction plays a dual role in the deBroglie-Bohm interpretation. On the one hand, it is part of the ontology, a pilot wave that dictates the dynamics of the system's configuration (the positions of the particles in the nonrelativistic theory). On the other hand, the wavefunction has a statistical character, specifying the distribution over the system's configurations. In order to eliminate this dual role, Valentini suggests that the wavefunction is only a pilot wave and that any distribution over the configurations should be allowed as the initial condition. It is argued that one can still recover the standard distribution of configurations on a coarse-grained scale as a result of dynamical evolution [26]. Within this approach, the no-signalling constraint is a feature of a special equilibrium distribution. The tension between Bell inequality violations and no-signalling is resolved by abandoning the latter as a fundamental feature of the world and asserting that it only holds as a contingent feature. The fine-tuning is explained as the consequence of equilibration. (It has also been noted in the causal model literature that equilibration phenomena might account for fine-tuning of causal parameters [27].)

Conversely, the version of the deBroglie-Bohm interpretation espoused by Dürr, Goldstein and Zanghì [28] – which takes no-signalling to be a non-contingent feature of the theory – does not seek to provide a dynamical explanation of the fine-tuning. Consequently, it seems fair to say that the fine-tuning required by the deBroglie-Bohm interpretation is less objectionable in Valentini's version of the theory. On the other hand, the cost of justifying the fine-tuning by a dynamical process of equilibration is that, because true equilibrium is an idealization that is never achieved in finite time, one would expect systems to have small deviations from equilibrium and such deviations could in principle be exploited to send signals superluminally. Valentini endorses this consequence of his version of the deBroglie-Bohm interpretation [29] and indeed has made proposals for where the strongest deviations from equilibrium might arise [30]. Therefore, anyone who thinks that the absence of superluminal signals is a necessary, rather than a contingent, feature of quantum theory, will not be enthusiastic about Valentini's approach."
 
  • #273
bobob said:
(1) Huh? If you have spacelike separated events, the events cannot be time ordered. Therefore, the time ordering is frame dependent, and in particular the events can be made simultaneous. Neither can be the cause of the other. Neither is in the other's light cone.
Yes, but retrocausal theories don't have propagation at spacelike distances, only at timelike distances. They simply don't have spacelike propagation, so there is no point arguing about it. This is exactly the feature that makes them different from nonlocal theories.

bobob said:
(2) Events which are within each other's light cone are, by definition, time ordered. If there is a cause and effect, which is the cause and which is the effect are defined by their time ordering. Retrocausal in this case would mean a closed timelike world line, since all of the world lines connecting causes and effects are timelike.

To state otherwise is to flat out say relativity is incorrect.
The part in bold is not necessarily true. Retrocausal theories (see Ruth Kastner's papers and books for an example) have events in both directions of the light cone causing each other, in a sense, while still preserving Relativity.
 
  • #274
atyy said:
Hidden variables must be in some sense "fine tuned" as proved by Wood and Spekkens. However, they also argue that some forms of fine tuning are more acceptable than others
I agree with their statements on this. I think the best versions of hidden variable theories are ones where the experimental imprecision required to mask signalling is a result of some kind of thermalization or equilibrium process.

Hopefully the occurrence of this thermalization can be proven in both cases (or, better yet from a scientific perspective, in just one).

If I remember right Matthew Leifer is looking at retrocausal theories that thermalize into Quantum Mechanics.

On a personal note I think such thermalization/equilibrium hidden variable theories might be the last "hope" for a realist account of subatomic physics. If they are proven to fail (i.e. it is demonstrated such thermalization cannot occur) I think we are driven to Copenhagen or QBism. I get the impression people like Robert Spekkens and Matthew Leifer have similar views, though perhaps not.
 
  • #275
DarMM said:
On a personal note I think such thermalization/equilibrium hidden variable theories might be the last "hope" for a realist account of subatomic physics. If they are proven to fail (i.e. it is demonstrated such thermalization cannot occur) I think we are driven to Copenhagen or QBism. I get the impression people like Robert Spekkens and Matthew Leifer have similar views, though perhaps not.

If we are driven to "pure" Copenhagen that will be just fine with me. I have always hated how Bohmian Mechanics takes all the mystery out of QM o0)
 
  • #276
atyy said:
It is argued [by Valentini] that one can still recover the standard distribution of configurations on a coarse-grained scale as a result of dynamical evolution
DarMM said:
I think such thermalization/equilibrium hidden variable theories might be the last "hope" for a realist account of subatomic physics. If they are proven to fail (i.e. it is demonstrated such thermalization cannot occur)
Valentini's argument is faulty; see this post. Much improved arguments would be needed to prove thermalization.
 
  • #277
bobob said:
Unfortunately, that reasoning is flawed. If events are spacelike seperated, then a lorentz transformation can make them simultaneous or make either event occur before the other. So, if you assume one event causes the other, you can make a lorentz transform to a frame in which that is false, which requires the event you assumed was the cause to act retrocausally. That is what non-local means.
I assumed there is no causal relationship between E2 and E3 as I defined them, and this was explicitly stated in my post.

The whole point of my post was to argue against lumping together retrocausal models with pure nonlocal models. Others have clarified here some further distinctions and terminology I hadn’t been familiar with. Thus, I learn that my mental model of a pure nonlocal model seems best described by what is called acausal, which emphasizes the lack of causal relation between E2 and E3.

This was all purely in response to @stevendaryl's classification. I really like his whole argument through this thread, and was just arguing against lumping together acausal and retrocausal.
 
  • #278
A. Neumaier said:
Valentini's argument is faulty; see this post. Much improved arguments would be needed to prove thermalization.
I fully agree; I don't think the required thermalization/equilibrium process has been demonstrated in either nonlocal or retrocausal theories.
 
  • #279
For me, this has to do with determinism and the very definition of "observation".

In the broadest possible meaning, observation is an interaction, an exchange of information between the phenomenon and the observer. In that sense, there are some phenomena which cannot be observed, for instance because they don't interact, because they are beyond a causal horizon, or because they lie in the future.

Events that are observed acquire some value (or set of values) and become part of a deterministic space; to those which are not observed (or not yet observed) you can assign a probability, and they are part of a phase space. For instance, a prediction of some future event exists only as a probability until the event occurs and is measured.

Furthermore, every observable and observed event exists in the past. Then the past can be viewed as deterministic, and the future (plus regions behind horizons) as stochastic.
 
  • #280
jocarren said:
Then the past can be viewed as deterministic
But the past cannot be observed either, and becomes more and more uncertain as one goes back in time.
 
