Do Bell and PBR together point toward nonlocal reality?

  • #51
Ilja said:
Time-symmetric interpretations, with causal influences into the past, are interpretations for those who like science fiction and mysticism. There is not a single bit of empirical evidence in favour of causal influences from the future into the past.

We have, of course, very strong evidence against Einstein causality. It is not possible to give any realistic interpretation of violations of Bell's inequality compatible with Einstein causality, so it has to be given up. But that means we have to go back to classical causality, and there is no reason to go in the direction of the sci-fi mysticism of causal influences into the past.

I think there is disagreement about what is suggested by the empirical evidence. There is no evidence in favor of there being a direction of time in the laws of physics. There is no evidence of any breakdown in (local) Lorentz invariance of physics. So there is no empirical evidence in favor of the program you suggest, which is to give up Einstein causality in favor of a time-asymmetric, non-Lorentz-invariant theory.

Having said that, I don't think anyone needs empirical justification for exploring an idea. In the early stages of developing a theory, it's basically like brainstorming; nothing should be considered too far-out. May a thousand flowers bloom--or rather, may a thousand flowers be planted in the hope that maybe one will bloom.

It definitely isn't scientific to criticize an approach based on the fact that it sounds silly. That's very subjective.
 
  • #52
Demystifier said:
It is a matter of textbook QM that all position operators (at a given time) commute. Therefore, knowing all particle positions (at a given time) does NOT include non-commuting observables.

But if you know the position of a particle at all times, then you know the velocity at all times (well, if the position is a differentiable function of time). Yet position and velocity are non-commuting.
 
  • #53
DrChinese said:
dBB, as I understand it (and I am probably missing something on this point) says that all observables would be predictable with certainty if you only knew all relevant particle positions (which are in turn unknowable). That would include non-commuting observables.

On the other hand: PBR, as I understand it (and I am probably missing something on this point as well) says that non-commuting observables cannot both have well-defined values at all times.

I know this is an over-simplification. And I realize that PBR requires some assumptions that allow an "escape" for Bohmian-class theories. My point is that the general tenor of these approaches is contradictory, even though there may be an escape. So we will need to see how these issues shake out, if at all.

Could you post a concise statement of PBR, or a link to such a statement? I remember reading the paper and yawning, because it didn't seem like it said anything that I didn't already know (or suspect).

[edit]Never mind, I found a good discussion here:
http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/
 
  • #54
Ilja said:
That dBB is free of contradictions can easily be seen by looking at Bohm's original theory, because the theory is defined there completely. All the equations are there. The equivalence of dBB in quantum equilibrium to standard QT is a triviality, so if you think QT is free of contradiction, there is not much room for believing that dBB is contradictory.

I don't agree that it is a triviality that dBB is equivalent to standard quantum theory. Maybe the phrase "quantum equilibrium" works to address my worries, but standard quantum mechanics applies to tiny systems such as single molecules, where the notion of "equilibrium" seems ill-defined.

Take an example of a single particle in some kind of potential well. The standard quantum approach is that the wave function gives a probability distribution on positions of the particle, and if you perform a measurement to find out the particle's location, then the wave function collapses to something sharply peaked at the observed location. A second measurement will have probabilities given by the collapsed wave function, not the original.

But in the Bohm model, the particle always has a definite position. So what is the relationship between the wave function and the particle's position? When you detect the particle at some location, does the wave function collapse to a sharply peaked one? If so, what is the mechanism for this? Presumably, this means that there is an interaction between the detector and the wave function, but such an interaction goes beyond ordinary quantum mechanics, it seems to me. I don't see that they are equivalent.

The usual argument for the equivalence of Bohm's model and standard quantum mechanics is for an ensemble of many particles with the same wave function. In such a scenario, the effect of detecting a single particle is negligible, and so there is not a big error introduced by using the same wave function as before.
 
  • #55
stevendaryl said:
I think there is disagreement about what is suggested by the empirical evidence. There is no evidence in favor of there being a direction of time in the laws of physics. There is no evidence of any breakdown in (local) Lorentz invariance of physics. So there is no empirical evidence in favor of the program you suggest, which is to give up Einstein causality in favor of a time-asymmetric, non-Lorentz-invariant theory.
I would personally prefer to travel a little bit in the direction of the past, to when I was younger and healthier. Unfortunately, I cannot do this, and all the empirical evidence I have suggests that this is simply impossible. So empirical evidence strongly suggests that there is no time symmetry.

Thus, I conclude that there is something wrong with the time symmetry of our fundamental theories. By the way, the collapse in the Copenhagen interpretation, as well as the development toward quantum equilibrium in dBB theory, is not time-symmetric, so the fundamental theory is less time-symmetric than usually presented.

Ok, I agree, introducing hidden objects which break a symmetry, in a situation where we have not yet observed any violation of this symmetry, is not nice. That means one needs serious reasons. But there are very serious reasons - all one has to do to see this is to look at the alternatives.

The alternative is giving up realism. That's more than a nice phrase. It means, if taken seriously, giving up science. Ok, nobody takes it seriously, so we will continue to apply realism as usual in all the domains where science has already been successful. But if it were a good idea, we should apply it everywhere, that is, reject realism everywhere. If that is not a good idea, then giving it up in fundamental physics only is, maybe, not a good idea either.

More fundamental theories often have different symmetry groups. Thus, to think that the symmetry group of the current theory survives is an idea certainly worth trying, but no more than that; it is clearly not a necessity, or something having a fundamental connection with the scientific method itself. So giving up a particular symmetry group - especially in a situation where the two most fundamental theories we have have different symmetry groups - is not problematic.

But giving up realism is something completely different.
 
  • #56
Ilja said:
I would personally prefer to travel a little bit in the direction of the past, to when I was younger and healthier. Unfortunately, I cannot do this, and all the empirical evidence I have suggests that this is simply impossible. So empirical evidence strongly suggests that there is no time symmetry.

Logically, the argument "The lack of evidence for X implies evidence against Y" requires you to establish that if Y were true, then X would follow. In the case X = changing the past, Y = time-symmetric physics, it doesn't work. If you actually look at the mechanism for causality, you will see that it is ultimately about boundary conditions, not about the directionality of the laws of physics. Causality propagates in the direction of increasing entropy.

Now, there is a central mystery about cosmology, which is: Why was the entropy of the early universe so low? It's possible that new physics will be required to explain this, and that that new physics might be time-asymmetric. But for non-cosmological physics, involving small regions of spacetime, there is no need for time-asymmetric laws of physics in order to understand the asymmetry in causality.
 
  • #57
Ilja said:
Ok, I agree, introducing hidden objects which break a symmetry, in a situation where we have not yet observed any violation of this symmetry, is not nice. That means one needs serious reasons. But there are very serious reasons - all one has to do to see this is to look at the alternatives.

The alternative is giving up realism.

I think that it's important to distinguish between anti-realism in the form of solipsism--"Nothing exists other than my perceptions, and theories of physics are just ways of describing regularities in those perceptions"--and in the form of the conclusion that reality is very different from how it appears. A Many-Worlds type model is very different from the reality that we perceive, but it's not throwing away the idea of reality.

There is a philosophical issue here, which is to what extent a theory of physics should be as close as possible to what we directly observe. It is certainly logically possible that there could be a huge gap between the two.
 
  • #58
Demystifier said:
dBB is still analogous to classical statistical mechanics. But the point is that it is analogous to the statistical mechanics of particles in some external potential. In classical mechanics, not only the particles are real; the potential is real as well. The role of the potential is somewhat different from that of the particles, which is why you can call it nomological rather than ontological. But if you define the notion of reality in the PBR sense, then the potential is real, and not merely epistemic.
Thanks, that's a nice point of view. I still don't understand something: both the quantum potential and the probabilities are derived from the wave function. The wave function and the potential are regarded as ontic in the PBR sense. So where does the epistemicity, which is reflected in the probabilities, come from? Or, speaking in terms of classical mechanics: we seem to have an equation for a state of knowledge ρ which describes the motion of some particles in an ontic potential V(ρ). This is hard for me to reconcile.
 
  • #59
stevendaryl said:
If you actually look at the mechanism for causality, you will see that it is ultimately about boundary conditions, not about the directionality of the laws of physics. Causality propagates in the direction of increasing entropy.
Unless we can derive the second law from the time-symmetric laws, we have to add it as a fundamental (directional) law in its own right. I still haven't wrapped my head around what such a derivation could look like (Demystifier has written an interesting paper with different local arrows of time, but I haven't had the time to read it in detail).
 
  • #60
kith said:
Unless we can derive the second law from the time-symmetric laws, we have to add it as a fundamental (directional) law in its own right. I still haven't wrapped my head around what such a derivation could look like (Demystifier has written an interesting paper with different local arrows of time, but I haven't had the time to read it in detail).

No, you absolutely do not have to add the second law as a fundamental law. There is no need for it, since the time-symmetric laws of physics are overwhelmingly likely to evolve a low entropy state into a higher entropy state. The thing that you may have to add by hand as an unexplained additional assumption is that the universe started out in an extremely low entropy state.
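This relaxation can be seen concretely in a toy model such as the Kac ring, whose dynamics is deterministic and exactly time-reversible yet drives a low-entropy initial state toward equilibrium. A minimal sketch (the ring size and marker density are arbitrary illustrative choices):

[code]
# Kac ring: N sites on a ring, each holding a ball colored +1 or -1.
# Each step, every ball hops one site clockwise; a ball crossing a "marked"
# edge flips color. The map is exactly reversible (hop counterclockwise to
# undo it), yet the coarse-grained magnetization relaxes toward 0.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
balls = np.ones(N, dtype=int)        # all +1: a very low-entropy macrostate
markers = rng.random(N) < 0.1        # marked edges, density mu = 0.1

for t in range(51):
    if t % 10 == 0:
        print(t, balls.mean())       # decays roughly like (1 - 2*mu)**t
    balls = np.roll(balls * np.where(markers, -1, 1), 1)
[/code]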
 
  • #61
stevendaryl said:
Take an example of a single particle in some kind of potential well. The standard quantum approach is that the wave function gives a probability distribution on positions of the particle, and if you perform a measurement to find out the particle's location, then the wave function collapses to something sharply peaked at the observed location. A second measurement will have probabilities given by the collapsed wave function, not the original.

But in the Bohm model, the particle always has a definite position. So what is the relationship between the wave function and the particle's position? When you detect the particle at some location, does the wave function collapse to a sharply peaked one? If so, what is the mechanism for this? Presumably, this means that there is an interaction between the detector and the wave function, but such an interaction goes beyond ordinary quantum mechanics, it seems to me. I don't see that they are equivalent.

The relationship between the wave function and the configuration is that the configuration $q(t)$ follows the guiding equation defined by the wave function.

There is an interaction between the detector and the particle, and this interaction has to be described by dBB theory for the whole system. This may be impossible in practice but is unproblematic conceptually. There is a wave function of the combined system $\Psi(q_{sys}, q_{env}, t)$, which follows a Schrödinger equation, and configurations $q_{sys}(t)$, $q_{env}(t)$, which follow the guiding equation. The point is that there is also an effective wave function of the system, which follows the system's own Schrödinger equation whenever there is no interaction between the system and the environment, and it is defined simply by
$\psi(q_{sys}, t) = \Psi(q_{sys}, q_{env}(t), t)$. But during the interaction, the effective wave function does not follow the Schrödinger equation for the system alone. Instead, its evolution describes the collapse of the wave function. The final result of this process depends on the configuration of the measurement device $q_{env}(t_{fin})$, or, in other words, on the measurement result which we see.

In some sense, this goes beyond QM, indeed. QM does not describe the measurement process. But everything QM tells us is recovered. The wave function collapses, and the resulting effective wave function of the system is uniquely defined by the result of the measurement. The resulting probabilities can be computed correctly, using the same QM formulas, if one assumes that the initial state of the whole system is $\Psi(q_{sys}, q_{env}) = \psi(q_{sys})\,\psi_{env}(q_{env})$ and that everything is in quantum equilibrium.
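For concreteness, here is a minimal numerical sketch of the guiding equation at work (an illustration only: a single free particle with $\hbar = m = 1$, a Gaussian packet, and an arbitrary grid, time step, and starting position):

[code]
# dBB for one free particle: split-step evolution of psi plus Euler
# integration of the guiding equation dq/dt = (hbar/m) Im(dpsi/dx / psi).
import numpy as np

hbar = m = 1.0
N, L = 2048, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

psi = np.exp(-0.5*x**2 + 2j*x)        # Gaussian packet with momentum ~ 2
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

def evolve(psi, dt):
    # Free Schroedinger evolution in momentum space (exact for V = 0)
    return np.fft.ifft(np.exp(-1j*hbar*k**2*dt/(2*m)) * np.fft.fft(psi))

def velocity(psi, q):
    # Guiding equation: v = (hbar/m) Im(psi* dpsi/dx) / |psi|^2 at x = q
    dpsi = np.gradient(psi, dx)
    j = np.interp(q, x, np.imag(np.conj(psi)*dpsi))
    rho = np.interp(q, x, np.abs(psi)**2)
    return (hbar/m) * j / rho

q, dt = 0.5, 0.01                     # arbitrary initial Bohmian position
for _ in range(1000):
    q += velocity(psi, q)*dt
    psi = evolve(psi, dt)
print("Bohmian position at t = 10:", q)   # drifts with the packet (v ~ 2)
[/code]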
 
  • #62
Ilja said:
There is an interaction between the detector and the particle, and this interaction has to be described by dBB theory for the whole system. This may be impossible in practice but is unproblematic conceptually. There is a wave function of the combined system $\Psi(q_{sys}, q_{env}, t)$, which follows a Schrödinger equation, and configurations $q_{sys}(t)$, $q_{env}(t)$, which follow the guiding equation.

I understand that, but without at least an argument showing that the interaction of the detector with the wave function will cause an apparent collapse of the (effective single-particle) wave function to a sharply peaked delta function, I don't think that you can say with certainty that the Bohm approach is empirically equivalent to the usual approach.
 
  • #63
kith said:
The wave function and the potential are regarded as ontic in the PBR sense. So where does the epistemicity, which is reflected in the probabilities, come from? Or, speaking in terms of classical mechanics: we seem to have an equation for a state of knowledge ρ which describes the motion of some particles in an ontic potential V(ρ).

The state of knowledge is introduced by the notion of quantum equilibrium.

You have a bottle - ontic. You put some water into the bottle. It can now move in the bottle in a quite arbitrary way. But there is a preferred state, where the water is in "equilibrium," at rest. This equilibrium is clearly defined by the form of the bottle.

In a similar way, an epistemic probability distribution ρ(q) can be arbitrary, and the dBB equations tell us how it changes in time. But there is a special probability distribution - the quantum equilibrium $\rho(q) = |\psi(q)|^2$ - which is preferred: once the system is initially in this equilibrium, it remains there.
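This equivariance is easy to check numerically. A sketch under standard textbook assumptions (a free Gaussian packet with $\hbar = m = 1$, for which the Bohmian trajectories are known to scale as $x(t) = x(0)\,\sigma(t)/\sigma(0)$):

[code]
# Sample an ensemble from |psi_0|^2, transport each point along its Bohmian
# trajectory, and check the ensemble still matches |psi_t|^2.
import numpy as np

rng = np.random.default_rng(0)
hbar = m = 1.0
sigma0, t = 1.0, 3.0
sigma_t = sigma0 * np.sqrt(1 + (hbar*t/(2*m*sigma0**2))**2)

x0 = rng.normal(0.0, sigma0, size=100_000)   # quantum equilibrium at t = 0
xt = x0 * (sigma_t/sigma0)                   # Bohmian flow for this packet

print("spread of |psi_t|^2:", sigma_t)
print("spread of ensemble: ", xt.std())      # agrees: still in equilibrium
[/code]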
 
  • #64
stevendaryl said:
No, you absolutely do not have to add the second law as a fundamental law. There is no need for it, since the time-symmetric laws of physics are overwhelmingly likely to evolve a low entropy state into a higher entropy state.
Neither the Liouville equation nor the von Neumann equation evolves a low entropy state to a higher entropy state. So what time-symmetric laws are you referring to?
 
  • #65
stevendaryl said:
I understand that, but without at least an argument showing that the interaction of the detector with the wave function will cause an apparent collapse of the (effective single-particle) wave function to a sharply peaked delta function, I don't think that you can say with certainty that the Bohm approach is empirically equivalent to the usual approach.

Of course you will not obtain δ-functions if you consider a realistic measurement process with finite energy. But any realistic description of measurements in QM has the same problem.

What is usually done in QM is to treat measurements as interactions such that $\psi_s\psi_e \to \sum_i \psi_s^i\psi_e^i$. If the $\psi_e^i(q_e)$ are now interpreted as macroscopic states of the measurement device after the measurement, then this clearly translates into the condition that the $\psi_e^i(q_e)$ don't overlap, so that if we know $q_e$ we can also uniquely identify the corresponding value $i$, the measurement result, because $\psi_e^j(q_e) \approx 0$ for all other $j$. But then $\sum_j \psi_s^j\,\psi_e^j(q_e) \approx \psi_s^i\,\psi_e^i(q_e)$.
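A toy numerical version of that last step (an illustration with made-up numbers: two outcomes and Gaussian pointer packets centered far apart):

[code]
# Effective collapse from non-overlapping pointer states: evaluate the
# entangled state sum_i c_i s_i psi_e^i(q_e) at the actual pointer value q_e.
import numpy as np

def pointer(q, center, width=0.1):
    # Gaussian pointer wave function psi_e^i centered on reading i
    return np.exp(-(q - center)**2 / (2*width**2))

c = np.array([0.6, 0.8])            # system amplitudes, |c|^2 sums to 1
centers = np.array([-5.0, 5.0])     # far apart, so branches don't overlap

q_e = 5.02                          # the actual pointer configuration
branch = c * pointer(q_e, centers)
psi_eff = branch / np.linalg.norm(branch)
print(psi_eff)                      # ~ [0., 1.]: only branch 2 survives
[/code]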
 
  • #66
Thanks, Ilja. I think I get the main idea of the analogy now.

Ilja said:
But there is a special probability distribution - the quantum equilibrium $\rho(q) = |\psi(q)|^2$ - which is preferred: once the system is initially in this equilibrium, it remains there.
It seems a bit strange to me that the equilibrium probability distribution follows changes in the potential instantaneously.
 
  • #67
kith said:
Unless we can derive the second law from the time-symmetric laws, we have to add it as a fundamental (directional) law in its own right. I still haven't wrapped my head around what such a derivation could look like (Demystifier has written an interesting paper with different local arrows of time, but I haven't had the time to read it in detail).

I might point out that entropy can increase from "now" in both time directions. Obviously most lab situations are special cases in which entropy is made to be unusually lower than the surroundings. If you sampled the entropy of a typical environment in thermal equilibrium, wouldn't you expect it to be at a local minimum? I.e., the number of states it could have evolved from and the number of states it can evolve toward are both greater than now? That would be the statistical view, I believe. In a film of that, I do not believe you could discern its direction as forward or backward in any way (in contrast to the usual idea of a film of a glass breaking being an example of the time direction being obvious).

Or alternately, think of decoherence: entanglement disperses and decreases as you go forward in time. Does that require a fundamental time asymmetric law to describe as well?
 
  • #68
kith said:
Neither the Liouville equation nor the von Neumann equation evolves a low entropy state to a higher entropy state. So what time-symmetric laws are you referring to?

You have to be a little careful about what you mean by "entropy" when talking about the second law. In the case of both classical phase space and quantum wave functions, there is a notion of "entropy" that is unchanged by the evolution equations. But that is not the kind of entropy that we observe to always increase. When a quantity of gas expands rapidly to fill a vacuum, that's an irreversible process, even though the volume in phase space (which is what is preserved under the Liouville equation) remains constant. We don't observe phase space volumes; we observe that gas expands to fill a vacuum, and it never happens that all the gas in a container spontaneously gathers into a small volume, leaving vacuum behind.

The kind of entropy that we observe to increase is coarse-grained entropy. Roughly speaking, the coarse-grained entropy of a state is the log of the number of microscopic states that "look the same" under the coarse-graining. Time-symmetric laws don't imply that this notion of entropy is constant.
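The distinction is easy to demonstrate in a toy calculation (an illustration: free particles in a unit box with reflecting walls and 20 spatial bins, all parameters arbitrary). The microscopic motion is reversible, yet the binned entropy climbs toward its maximum of log 20:

[code]
# Coarse-grained entropy of a gas expanding from the left half of a box.
import numpy as np

rng = np.random.default_rng(1)
N, bins = 100_000, 20
x = rng.uniform(0.0, 0.5, N)     # all particles start in the left half
v = rng.normal(0.0, 1.0, N)      # Maxwellian velocities

def coarse_entropy(pos):
    counts, _ = np.histogram(pos, bins=bins, range=(0.0, 1.0))
    p = counts[counts > 0] / len(pos)
    return -np.sum(p * np.log(p))

for t in [0.0, 0.5, 2.0, 10.0]:
    xt = 1.0 - np.abs((x + v*t) % 2.0 - 1.0)   # free flight, reflecting walls
    print(t, coarse_entropy(xt))               # rises toward log(20) ~ 3.0
[/code]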
 
  • #69
stevendaryl said:
I think that it's important to distinguish between anti-realism in the form of solipsism--"Nothing exists other than my perceptions, and theories of physics are just ways of describing regularities in those perceptions"--and in the form of the conclusion that reality is very different from how it appears. A Many-Worlds type model is very different from the reality that we perceive, but it's not throwing away the idea of reality.
I know that one can play around a lot with different notions of realism and confuse people, especially by introducing interpretations which are not well-defined, like many worlds (it presupposes a fundamental decomposition of the universe into systems without any basis for it, and I have not seen a satisfactory derivation of the Born rule yet).

And I have no problem if someone tries the hard job of developing a weaker notion of realism which is nonetheless powerful enough to be comparable with common-sense realism but compatible with Einstein causality and the violation of Bell's inequality. I doubt this is possible, but who knows.

My point is that there is a simple and straightforward alternative - the quite trivial assumption that the symmetry group of the subquantum theory is different from that of quantum theory, or that the symmetry group of quantum gravity is different from that of classical gravity. There is no need to change a single bit in the fundamental notions of realism and causality. This is the simple, easy way out of the violation of Bell's inequality, and of a lot of other problems too.

But it is essentially forbidden. String theory publishes thousands of articles without a single empirical prediction. Following the alternative approach, you can be happy if you succeed in publishing a single paper, even if you succeed in deriving the whole particle content of the standard model from simple principles applied to a quite simple model (arXiv:0908.0591), and you can be sure that nobody even looks at it - because this horrible approach requires a preferred frame.

There is a philosophical issue here, which is to what extent a theory of physics should be as close as possible to what we directly observe. It is certainly logically possible that there could be a huge gap between the two.
Almost everything is logically possible, so that's not the point. Of course, a theory closer to what we directly observe is preferable; the question is whether the competitor has other advantages.

Logically, the argument "The lack of evidence for X implies evidence against Y" requires you to establish that if Y were true, then X would follow. In the case X = changing the past, Y = time-symmetric physics, it doesn't work. If you actually look at the mechanism for causality, you will see that it is ultimately about boundary conditions, not about the directionality of the laws of physics. Causality propagates in the direction of increasing entropy.
My point was not a logical proof, but that there is strong empirical evidence that there is no time symmetry in nature. That there is an animal named "mechanism for causality" is new to me; as far as I know, causality is fundamental, assumed as given from the start. But, I guess, we are thinking of different things when we say "causality".
 
  • #70
DrChinese said:
If you sampled the entropy of a typical environment in thermal equilibrium, wouldn't you expect it to be at a local minimum?
I'm not sure I understand your post correctly. By "environment," do you mean something like a cold vacuum in an experimental chamber which encases the system of interest?

DrChinese said:
In a film of that, I do not believe you could discern its direction as forward or backward in any way.
So this film would show the evacuation of the chamber before the experiment and the flooding with air afterwards?

DrChinese said:
Or alternately, think of decoherence: entanglement disperses and decreases as you go forward in time. Does that require a fundamental time asymmetric law to describe as well?
Obviously, I don't doubt that entanglement gets destroyed by decoherence and that a gas expands. ;-) I just don't really understand how these processes are derived from the time-symmetric laws.

stevendaryl said:
The kind of entropy that we observe to increase is coarse-grained entropy. Roughly speaking, the coarse-grained entropy of a state is the log of the number of microscopic states that "look the same" under the coarse-graining.
I never really got this distinction. The Liouville equation leaves the Shannon entropy of the probability distribution constant. What you call the coarse-grained entropy is also called the Boltzmann entropy, and I thought it was equivalent to the Shannon entropy. Maybe that's a misconception?

Phew, I think this really leads off topic.
 
  • #71
stevendaryl said:
Could you post a concise statement of PBR, or a link to such a statement? I remember reading the paper and yawning, because it didn't seem like it said anything that I didn't already know (or suspect).

[edit]Never mind, I found a good discussion here:
http://mattleifer.info/2011/11/20/can-the-quantum-state-be-interpreted-statistically/

That is the same link I posted above. :smile:

The issue about PBR and dBB vis a vis that link is: can 2 dBB wave functions overlap as shown in the Probability Density diagram? To quote: "... the question is: should we think of it as an ontic state (more like a phase space point), an epistemic state (more like a probability distribution), or something else entirely?"

The idea being that if the dBB wave function is sharply defined (as I think Ilja is saying), there can be no overlap. But that in turn contradicts the statistical spread from our unknown initial conditions. So I think if the dBB pilot wave is to be considered real, then there is no spread of values, there are hidden variables, there is non-local determinism, and QM is incomplete. PBR, on the other hand, would say that if there are hidden variables, there must be a spread of outcomes for a particular wave state, and there will be overlap (therefore placing the theory in Group 1 and making it prohibited).

I realize that Bohmians see PBR as either neutral or a plus for their position. But I see it as either neutral or a negative for their position. As more and more elements of dBB are developed and declared, I think there are more and more opportunities for Bohmian-class theories to run afoul of PBR in a fashion that they would not with Bell.

In other words: I agree with you that demonstrating the equivalence of QM and Bohmian-class theories is not trivial. I think the idea that Bohmian theories *automatically* reproduce all QM predictions is unjustified. Logically, there must be a lot of ways to formulate the interaction effects of particle positions - and they can't all be equivalent (and be equivalent to QM at the same time). The very fact that there are multiple versions of dBB would imply that as well. Again, I cannot say *exactly* what is wrong with the Bohmian reasoning on this, but it certainly raises a lot of questions in my mind.
 
  • #72
DrChinese said:
I might point out that entropy can increase from "now" in both time directions. Obviously most lab situations are special cases in which entropy is made to be unusually lower than the surroundings. If you sampled the entropy of a typical environment in thermal equilibrium, wouldn't you expect it to be at a local minimum? I.e., the number of states it could have evolved from and the number of states it can evolve toward are both greater than now? That would be the statistical view, I believe. In a film of that, I do not believe you could discern its direction as forward or backward in any way (in contrast to the usual idea of a film of a glass breaking being an example of the time direction being obvious).

Or alternately, think of decoherence: entanglement disperses and decreases as you go forward in time. Does that require a fundamental time asymmetric law to describe as well?

There is a time-symmetric model of the second law that applies to classical physics (and I assume that it can be extended to quantum mechanics, as well).

Imagine taking a human being--okay, for ethical reasons, let it be a guinea pig instead--and putting it inside an impenetrable, eternal box. No energy or matter can go in or out. Now just wait--a billion years, a trillion years, $10^{100}$ years, however long it takes. After a while, the guinea pig will die, decompose, and reach some kind of uninteresting equilibrium state, and its component atoms will remain in that state for an ungodly length of time. But there will always be a certain amount of random thermal motion of the atoms. Purely by chance, if you are willing to wait forever, the atoms will eventually arrange themselves into a configuration that is arbitrarily close to the original state of the guinea pig. In other words, the guinea pig will eventually come back to life, a reversal of entropy.

But over an enormous span of time, if you plot entropy as a function of time, what you will find is that:

  1. By far, the most likely configuration is the maximal possible entropy.
  2. Very rarely, the entropy dips down to a non-maximal value.
  3. In almost all such cases, the entropy returns quickly to a higher value.

[Figure: a typical plot of entropy vs. time, with rare dips of varying depth below the maximum, labeled A, B, C in order of increasing depth]

The picture shows a typical plot of entropy vs time. Situations of type A are vastly more likely than situations of type B, which are vastly more likely than situations of type C, etc. So whatever the entropy is, if it's not the maximal value, then you are overwhelmingly likely to have higher entropy in the future, even though the graph is completely symmetric between past and future.

So the guinea pig, looking forward in such a universe, can assume the second law of thermodynamics. He will likely age, die, and decompose, just as the second law predicts.

What's weird about this thought experiment is that while the guinea pig can safely assume that he will be older and more decrepit in the future, he can't assume that he was younger and in better health in the past. In this model, the most likely past for the guinea pig is one in which he is older than now. It's overwhelmingly likely that right now the guinea pig is the youngest he has been for millennia and the youngest he will be for millennia to come.
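A curve with exactly this character is easy to generate (an illustration: the Ehrenfest urn model with 50 balls stands in for the box, and all numbers are arbitrary). Started from the lowest-entropy macrostate, the entropy shoots up and thereafter shows only rare dips, symmetric between past and future:

[code]
# Ehrenfest urn: N balls in two urns; each step one uniformly chosen ball
# switches urns. In equilibrium the chain is statistically time-symmetric,
# yet entropy dips are overwhelmingly both preceded and followed by rises.
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)
N, steps = 50, 200_000
n = N                                    # all balls in urn 1: lowest entropy

def S(n):
    # Boltzmann entropy: log of the number of microstates, C(N, n)
    return lgamma(N+1) - lgamma(n+1) - lgamma(N-n+1)

traj = np.empty(steps)
for t in range(steps):
    n += -1 if rng.random() < n/N else 1
    traj[t] = S(n)

print("S(start):", S(N), " S(max):", S(N//2))
print("fraction of time within 10% of S(max):", np.mean(traj > 0.9*S(N//2)))
[/code]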
 
  • #73
kith said:
So this film would show the evacuation of the chamber before the experiment and the flooding with air afterwards?

...

Phew, I think this really leads off topic.

Probably right about off-topic... :smile:

But as to the film: Consider a film of a volume of air in equilibrium. You cannot tell forward from backward (until you reach the boundary in which it is no longer in equilibrium).

Or: Why do we see the arrow of time as forward? Is that a requirement of (the fundamental law of) increasing entropy? Or is it just a coincidence? Or maybe it has something more to do with initial conditions.
 
  • #74
Ilja said:
That there is an animal named "mechanism for causality" is new to me; as far as I know, causality is fundamental, assumed as given from the start. But, I guess, we are thinking of different things when we say "causality".

In this case, I'm using causality to mean the propagation of effects. You drop a pebble into a pool of water, and the ripples spread out away from where you dropped it. Someone seeing those ripples will likely conclude that there must have been some disturbance at the apparent source of the outgoing circular waves. But the equations describing the propagation of waves in water are time-symmetric. So it is consistent with those equations to have converging concentric waves as well as diverging waves. So how do you explain why you always see diverging waves, and never see converging waves? It has to do with boundary conditions. The outgoing waves are the only possibility that is consistent with the boundary conditions.
 
  • #75
Ilja said:
My point was not a logical proof, but that there is strong empirical evidence that there is no time symmetry in nature. That there is an animal named "mechanism for causality" is new to me; as far as I know, causality is fundamental, assumed as given from the start. But, I guess, we are thinking of different things when we say "causality".

How is causality fundamental? Measuring an observable that does not commute with one for which the system is in a known eigenstate always produces a random value. That doesn't sound like anything empirically fundamental.

So I say that causes (or influences) from the future would appear (to us) as randomness in the present. Again, I am imagining some kind of time-symmetric formulation of QM. That doesn't seem to be more of a stretch than imagining a Bohmian formulation in which all particle positions everywhere are part of the equation. At least in the TS formulation, the set of things to consider resides in a nice Einsteinian light cone (albeit in 2 directions).

Of course, beauty is in the eye of the beholder. :biggrin:
 
  • #76
stevendaryl said:
What's weird about this thought experiment is that while the guinea pig can safely assume that he will be older and more decrepit in the future, he can't assume that he was younger and in better health in the past. In this model, the most likely past for the guinea pig is one in which he is older than now. It's overwhelmingly likely that right now the guinea pig is the youngest he has been for millennia and the youngest he will be for millennia to come.
Yeah. In relation to the origin of the universe, this argument says that it is incredibly more likely that all of the observable universe was created in a fluctuation just a moment ago than in a fluctuation with a much lower entropy at the Big Bang. I think this goes back to Boltzmann, and it is quite puzzling.

DrChinese said:
Or: Why do we see the arrow of time as forward? Is that a requirement of (the fundamental law of) increasing entropy? Or is it just a coincidence? Or maybe it has something more to do with initial conditions.
I see this more clearly now, thanks. I still have a gap in my understanding regarding the relation between coarse-grained entropy and microscopic theory, though.
 
  • #77
Yakir Aharonov's time symmetric interpretation of quantum mechanics (TSQM) offers a way to explain the EPR paradox and preserve local realism. (A TSQM-based explanation of the EPR Paradox was the #18 post in this discussion.) Please note that Yakir Aharonov was a student of Bohm and was very familiar with the deBroglie–Bohm theory (dBB). TSQM replaces dBB and, as noted above, provides a way to explain EPR, where dBB does not. For an introduction to TSQM, see post #23. For a few of the experiments which confirmed results TSQM had uniquely predicted, see post #24. As to the "realism" question, see posts #25 and 26.
 
  • #78
Regarding the arrow of time and the second law of thermodynamics: as a "QM freak" it's easy to forget gravity.

Clearly the initial conditions in the early universe are what give the direction and drive the destruction of exergy. Energy can't be destroyed, but exergy can, and exergy is the fuel that drives the universe.

My guess is that the perplexity regarding T-symmetry etc. will be gone once we get a complete theory of quantum gravity, hopefully... (The thing that tickles me is the question: what if gravity was "turned on" after "matter creation"? There you have the special initial conditions in a little box! ;)
 
  • #79
kith said:
I see this more clearly now, thanks. I still have a gap in my understanding regarding the relation between coarse-grained entropy and microscopic theory, though.

The idea of how the Liouville theorem is consistent with increasing coarse-grained entropy is illustrated by the following picture: imagine a system starting out with an uncertainty given by a certain compact volume in phase space, as shown on the left. With time, that simple shape evolves into a much more complex shape, such as the one on the right. The shape has the same actual volume as it did previously. But if you coarse-grain, and ignore the details of the shape, the shape on the right appears to occupy a larger volume of phase space than the one on the left. So coarse-graining (ignoring tiny details) makes the effective phase-space volume, and hence the entropy, grow.

[Figure: a compact blob in phase space (left) evolving into a thin, convoluted filament of equal area (right)]
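Numerically, the same picture can be produced with any area-preserving chaotic map (an illustration: the Arnold cat map on the unit torus, with arbitrary blob size and bin count). The exact volume of the blob never changes, but the number of coarse cells it touches grows until it effectively fills the torus:

[code]
# Area-preserving chaotic map: exact phase-space volume is constant, but the
# coarse-grained "volume" (number of occupied cells) keeps growing.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 0.05, size=(50_000, 2))   # small compact blob

def cat_map(p):
    # (x, y) -> (2x + y, x + y) mod 1; determinant 1, so area-preserving
    x, y = p[:, 0], p[:, 1]
    return np.stack(((2*x + y) % 1.0, (x + y) % 1.0), axis=1)

def occupied_cells(p, bins=50):
    h, _, _ = np.histogram2d(p[:, 0], p[:, 1], bins=bins,
                             range=[[0, 1], [0, 1]])
    return np.count_nonzero(h)

for step in range(8):
    print(step, occupied_cells(pts))             # grows toward 50*50 = 2500
    pts = cat_map(pts)
[/code]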
 
  • #80
Ilja said:
But it is essentially forbidden. String theory publishes thousands of articles without a single empirical prediction. Following the alternative approach, you can be happy if you succeed in publishing a single paper, even if you succeed in deriving the whole particle content of the standard model from simple principles applied to a quite simple model (arXiv:0908.0591), and you can be sure that nobody even looks at it - because this horrible approach requires a preferred frame.
This effect, well known to all of us working on less popular theories in physics, has more to do with sociology and psychology than with science. I have thought a lot about it and concluded that scientists are just like all other "ordinary" people. Even if they are more intelligent than average, they are not much more rational. But let me not enter into the details, because that would be off topic ...
 
  • #81
DrChinese said:
How is causality fundamental?
There is no explanation in terms of anything more fundamental.

Measuring an observable that does not commute with one for which the system is in a known eigenstate always produces a random value. That doesn't sound like anything empirically fundamental.
Different values of the probability also have some cause, no? By the way, I'm not talking about "empirically fundamental"; that sounds like a contradiction to me. Empirical predictions are always quite complex, derived things.

So I say that causes (or influences) from the future would appear (to us) as randomness in the present. Again, I am imagining some kind of time-symmetric formulation of QM. That doesn't seem to be more of a stretch than imagining a Bohmian formulation in which all particle positions everywhere are part of the equation. At least in the TS formulation, the set of things to consider resides in a nice Einsteinian light cone (albeit in 2 directions).
I don't follow. In dBB it depends on a 3-dimensional configuration; in your TS formulation, on two four-dimensional light cones. By the way, if the future already exists, we need no causality at all. The future simply remains as it is; it does not have to change at all.
 
  • #82
Jon_Trevathan said:
Yakir Aharonov's time symmetric interpretation of quantum mechanics (TSQM) offers a way to explain the EPR paradox and preserve local realism. (A TSQM-based explanation of the EPR Paradox was the #18 post in this discussion.) Please note that Yakir Aharonov was a student of Bohm and was very familiar with the deBroglie–Bohm theory (dBB). TSQM replaces dBB and, as noted above, provides a way to explain EPR, where dBB does not. For an introduction to TSQM, see post #23. For a few of the experiments which confirmed results TSQM had uniquely predicted, see post #24. As to the "realism" question, see posts #25 and 26.
Seen them and answered in #28.

Anyway, causal influence from the future violates Einstein causality too, which allows only causal influences from the past light cone. Thus, Einstein causality would be dead even on your choice. If it is not about causal influence from the future (which is how I interpret the paper), then it is about something different, and irrelevant as an explanation of the violation of Bell's inequality.

For me, causal influences from the future are mystical sci-fi nonsense, not worth considering seriously. To take them seriously, one would need extremely strong empirical evidence - something completely unexplainable with classical causality as in dBB. If you think otherwise, that's your choice.
 
  • #83
stevendaryl said:
But if you know the position of a particle at all times, then you know the velocity at all times (well, if the position is a differentiable function of time). Yet position and velocity are non-commuting.
Yes, but I think that's not what DrChinese had in mind.
 
  • #84
kith said:
Thanks, that's a nice point of view. I still don't understand something: both the quantum potential and the probabilities are derived from the wave function. The wave function and the potential are regarded as ontic in the PBR sense. So where does the epistemicity, which is reflected in the probabilities, come from? Or, speaking in terms of classical mechanics: we seem to have an equation for a state of knowledge ρ which describes the motion of some particles in an ontic potential V(ρ). This is hard for me to reconcile.
Let me use a simple classical analogy. Suppose that you have lost your keys in your apartment, but you have no idea in which room you have lost them. What you know is that some rooms are bigger and others are smaller. The rooms themselves and their sizes are ontic properties. Now, do these ontic properties imply some epistemic (probabilistic) properties as well? Yes, they do. You can easily conclude that the probability of finding the keys in a given room is proportional to the size of the room. It is more likely that you will find the keys in a bigger room than in a smaller one.

In Bohmian mechanics, instead of the room you have the wave function, and instead of the room's size you have $|\psi|^2$. The bigger $|\psi|^2$ is at a given point, the bigger the probability that you will find the particle there.
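The arithmetic of the analogy fits in a few lines (the room sizes are made-up numbers): a uniform prior over the apartment's area assigns each room a probability proportional to its size, just as quantum equilibrium assigns each region a probability proportional to $|\psi|^2$.

[code]
# Ontic sizes -> epistemic probabilities, for a uniform prior over area.
areas = {"hall": 6.0, "kitchen": 10.0, "living room": 24.0}  # square meters
total = sum(areas.values())
for room, area in areas.items():
    print(room, area/total)   # e.g. living room: 24/40 = 0.6
[/code]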
 
  • #85
Thanks, stevendaryl and Demystifier. I have learned quite a bit from this thread. :-)
 
  • #86
Ilja said:
There is no explanation in terms of anything more fundamental.

Circular reasoning. You assume that which you conclude, which is that causality rules. That would more or less force you down the Bohmian path. And voila...
 
  • #87
Ilja said:
Seen them and answered in #28.
For me, causal influences from the future are mystical sci-fi nonsense, not worth considering seriously. To take them seriously, one would need extremely strong empirical evidence - something completely unexplainable with classical causality as in dBB. If you think otherwise, that's your choice.

You need to read the papers I cited.
 
  • #88
@Jon_Trevathan: I have read (and cited) one of them; what was interesting to me has been clarified. I have no interest in interpretations which use confusing time-symmetric notions to describe a time-asymmetric world.

@DrChinese: No circular reasoning, because I have never claimed that I can somehow conclude that causality has to be fundamental. In my opinion it is, and I have never seen a meaningful approach where it is non-fundamental, derived from something different. Feel free to introduce me to such an approach.
 
  • #89
An interesting paper that kind of relates to the topic of this thread:
Theorem 16 (PBR). For any preparation independent theory that reproduces (a certain set of) quantum correlations, the wavefunction is ontic. Motivated by this, we present a weak version of Bell's theorem [3], in which we additionally assume preparation independence. The proof here is similar to that of proposition 14 and striking for its simplicity. The theorem could also be regarded as a combination of the PBR theorem, and a result closely related to the following, proved in [8].

Theorem 17. Quantum mechanics is not realisable by any preparation independent, local theory. Proof. If quantum mechanics is realisable by a preparation independent theory then, by the PBR theorem, the wavefunction is ontic with respect to that theory. We proceed by showing that there exist quantum correlations that cannot be realized by any local model for which the wavefunction is ontic...
On the Reality of Observable Properties
http://arxiv.org/pdf/1306.3216.pdf

If I'm interpreting this correctly, this is the reason why Leifer argued that using PBR we can now "infer nonlocality directly from EPR":
As emphasized by Harrigan and Spekkens, a variant of the EPR argument favoured by Einstein shows that any psi-ontic hidden variable theory must be nonlocal. Thus, prior to Bell's theorem, the only open possibility for a local hidden variable theory was a psi-epistemic theory. Of course, Bell's theorem rules out all local hidden variable theories, regardless of the status of the quantum state within them. Nevertheless, the PBR result now gives an arguably simpler route to the same conclusion by ruling out psi-epistemic theories, allowing us to infer nonlocality directly from EPR.
PBR, EPR, and all that jazz
http://www.aps.org/units/gqi/newsletters/upload/vol6num3.pdf
 
  • #90
Ilja said:
@DrChinese: No circular reasoning, because I have never claimed that I can somehow conclude that causality has to be fundamental. In my opinion it is, and I have never seen a meaningful approach where it is non-fundamental, derived from something different. Feel free to introduce me to such an approach.

Umm, Quantum Mechanics?

Perhaps you know of *something* where indeterminism (raw chance) does not play a part. Anything, actually. How about human behavior? Ever seen the slightest indication that A causes B there? The structure of the universe: what caused the Sun to be where it is and the Earth to be where it is? Anything...?

And if you even bother to mumble something about initial conditions, you will really bring a smile to my face. :smile: In fact you already have...
 
  • #91
DrChinese said:
Umm, Quantum Mechanics?
How does QM derive causality?

Perhaps you know of *something* where indeterminism (raw chance) does not play a part. Anything, actually. How about human behavior? Ever seen the slightest indication that A causes B there?
?? This suggests that you think indeterminism is somehow in contradiction with causality.

The structure of the universe: what caused the Sun to be where it is and the Earth to be where it is? Anything...?
And if you even bother to mumble something about initial conditions, you will really bring a smile to my face. :smile: In fact you already have...
As I said, it seems that your notion of causality is very different from my ideas about causality.
 
  • #92
Ilja said:
How does QM derive causality?

I can let DrChinese answer for himself, but I thought the point of quantum mechanics is that causality isn't fundamental. There is no causality at the level of microscopic physical laws, so the appearance of causality at the macroscopic level is some kind of emergent phenomenon.
 
  • #93
This is one position. But I doubt it is well justified, because it depends on the interpretation.

Essentially, independent of the physical theory, it is always possible to use a more solipsistic, positivistic interpretation which remains entirely silent about causality. In QM such a positivistic interpretation - the minimal interpretation - is quite popular; that's all.

In the Copenhagen interpretation, there are at least some elements of causality, or at least I think so: the measurement is the cause of the collapse of the wave function. The dBB interpretation is a classically causal interpretation.

The key point for me is that a positivistic interpretation cannot derive any causality at all. It can compute, and derive from more fundamental assumptions, probabilities and correlations. That's all. Observation can give only correlation, and theories which allow one to compute only observables - that is, probabilities and correlations - are in a similar situation: they can give only correlations.

You need a theoretical hypothesis to go beyond correlations. Causality is something about the underlying reality.
 
  • #94
Ilja said:
The key point for me is that a positivistic interpretation cannot derive any causality at all. It can compute, and derive from more fundamental assumptions, probabilities and correlations. That's all. Observation can give only correlation, and theories which allow one to compute only observables - that is, probabilities and correlations - are in a similar situation: they can give only correlations.

That's certainly right. You can't derive causality from mere correlation. However, the phenomena that gave rise to our notions of causality can be understood without actually using causality. In this case, causality would be an effective theory, rather than fundamental, in the same sort of way that thermodynamics is an effective theory, while the more fundamental theory is the physics of many interacting particles.

I don't think it's accurate to describe non-causal theories as "solipsistic". I would almost go so far as to reverse that. It's human nature to prefer causal theories, but there is no reason for the world to try to work in a way that is intuitively understandable to humans.
 
  • #95
Ilja said:
You need a theoretical hypothesis to go beyond correlations. Causality is something about the underlying reality.

I'm not convinced that there is a non-fuzzy notion of causality that goes beyond correlations. People typically are satisfied with a theory that predicts future states of the world in terms of past states, usually described with differential equations. But a differential equation is simply stating a correlation between future states and past states. It doesn't actually say that the past causes the future. What additional thing do you need to get causality?

I'm not sure.
 
  • #96
What you need is a theory. A theoretical hypothesis.

A deterministic equation can be considered, of course, as a particular example of a causal theory. Given the equation of the theory and the initial state, the result follows with certainty. But a causal theory is, of course, a little bit more: it also presupposes a direction of time (causal influence runs from past to future).

But there may, of course, also be causal theories which are not deterministic. Something like: the initial conditions A cause B, but we do not observe B but instead B', which differs from B with some probability of 5%. Or A and B and C together cause D, but unfortunately we cannot prepare A and B and C with certainty, and A and B, together with some null assumption about C, give D with probability 95% or so.

That causality goes beyond correlation is obvious. Correlation gives us p((A and B) or (not A and not B)) = 1.
Causality gives us "A causes B," or "B causes A," or "C causes A and C causes B" - already three different theories, in fact infinitely many, because for different C we have different causal explanations.
 
  • #97
What is the status of the measurement problem in the light of the PBR theorem?
 
  • #98
  • #99
Ilja said:
What you need is a theory. A theoretical hypothesis.

But how does the theory tell us that a correlation is actually a causal relationship? I'm not convinced that the word "cause" plays any role in physics that can't be played by "correlation".
 
  • #100
eloheim said:
Does considering a specific scenario, like the one presented below, help any in sorting out the differing notions of causality (and implications thereof) discussed in this thread?

Quantum correlations with no causal order

Here's a popular article describing the research. I know these things tend to be sloppy, but I had the bookmarks together, so please don't hate me :blushing::

Quantum causal relations: A causes B causes A

Thanks for the references.
 
