An argument against Bohmian mechanics?

  • #251
vanhees71 said:
Hm, out of curiosity of course, but when I can't check my ideas experimentally, I'm not doing much of value for science.
Then, by the same token, I think about Bohmian mechanics out of curiosity too, with the same caveat on the value for science. So whenever someone tells you that he likes to think about BM, just tell yourself: "Aaa, I get it, that's not really science but just curiosity, the same thing I am doing when I think about magic tricks.:woot: "
 
  • Like
Likes vanhees71
  • #252
atyy said:
Quantum equilibrium is analogous to thermodynamic equilibrium, so yes, it can be disturbed.
Unless the whole universe is in equilibrium, in which case the disturbance is only possible as a small-probability fluctuation.
 
  • Like
Likes vanhees71
  • #253
atyy said:
Who is "nobody"? Landau and Lifshitz? Weinberg? I hate to argue from authority, but you are simply wrong that nobody talks about a cut when two of the most distinguished quantum mechanics textbooks mention it.
Well, where in these books is this idea ever really used, and where is it actually necessary? I just take a measurement apparatus and measure something. I don't need a cut to use it to check the predictions of quantum theory.
 
  • #254
vanhees71 said:
Well, where in these books is this idea ever really used, and where is it actually necessary? I just take a measurement apparatus and measure something. I don't need a cut to use it to check the predictions of quantum theory.

They use it when they do not include the measurement apparatus in the quantum state.
 
  • #255
As someone who likes to explain magic tricks, I still find the comparison to BM a bit outrageous. It's a terrible analogy. If your explanation requires invisible pink unicorns that are spread over the whole universe and need to conspire over cosmic distances in order to make the rabbit appear, then I would reject the explanation immediately, even though I would have to admit that technically, it's not excluded by observations. Just because something is an explanation, it doesn't mean it's a rational explanation. BM is possibly one of the least rational explanations that people have come up with in the history of science.
 
  • Like
Likes vanhees71
  • #256
atyy said:
They use it when they do not include the measurement apparatus in the quantum state.
Where, concretely, do my experimentalist colleagues use the cut when they analyze an experiment at the LHC? As far as I know, they use a data file provided by the detectors, which are real-world measurement devices that register particles. The corresponding outcomes of measurements are compared to the (probabilistic) predictions of quantum theory. Nowhere do they need to assume a cut to design their experiments and evaluate the data in terms of cross sections predicted by QT.
 
  • #257
For experimentalists the location of the cut is unimportant. There is a mathematical formalism that wraps up the whole thing. That said, it is most unsatisfying if you believe there is a cut. The model doesn't distinguish between states pre and post cut as far as I can tell, making our formalism a shorthand.
 
  • Like
Likes vanhees71
  • #258
vanhees71 said:
Where, concretely, do my experimentalist colleagues use the cut when they analyze an experiment at the LHC? As far as I know, they use a data file provided by the detectors, which are real-world measurement devices that register particles. The corresponding outcomes of measurements are compared to the (probabilistic) predictions of quantum theory. Nowhere do they need to assume a cut to design their experiments and evaluate the data in terms of cross sections predicted by QT.

Concretely, the cut is used when they claim to have made an observation. You can tell they have made a cut if they claim certain observations are consistent with quantum mechanics, and they have used the Born rule.
 
  • #259
rubi said:
As someone who likes to explain magic tricks, I still find the comparison to BM a bit outrageous. It's a terrible analogy. If your explanation requires invisible pink unicorns that are spread over the whole universe and need to conspire over cosmic distances in order to make the rabbit appear, then I would reject the explanation immediately, even though I would have to admit that technically, it's not excluded by observations. Just because something is an explanation, it doesn't mean it's a rational explanation. BM is possibly one of the least rational explanations that people have come up with in the history of science.

But you like Consistent Histories, which means your dislike of BM is only a matter of taste - unlike vanhees71's, which is a technical disagreement. If we apply vanhees71's view, Consistent Histories is also pointless.
 
  • #260
ShayanJ said:
Technological limits aside, can you think of any experimental setup that is able to probe quantum non-equilibrium?

Valentini has done some work on this, but it has limitations that were discussed by Demystifier in his posts above.

Anyway, the basic idea is that unless there is fine tuning, it is unlikely the universe was created in equilibrium. If the universe was created in non-equilibrium, there may still be some signatures of that nonequilibrium observable today.
 
  • #261
atyy said:
But you like Consistent Histories, which means your dislike of BM is only a matter of taste - unlike vanhees71's, which is a technical disagreement. If we apply vanhees71's view, Consistent Histories is also pointless.
Can you summarize what Consistent Histories claims beyond the minimal interpretation? Perhaps I'm too pragmatic to realize where the problem with the minimal interpretation is, but I just don't get why it should help to introduce any elements of interpretation that go beyond Born's rule, which establishes the meaning of the formalism concerning observable (and observed!) facts about nature.
 
  • #262
atyy said:
Concretely, the cut is used when they claim to have made an observation. You can tell they have made a cut if they claim certain observations are consistent with quantum mechanics, and they have used the Born rule.
I still don't understand where the cut is made. Experimentalists measure something by repeating a preparation and measurement procedure many times and then analyzing the experiment statistically. That's the way you "test hypotheses" in the sense of probability theory, and QT is just a probability theory for physical processes in nature, no more and no less. There's no quantum-classical cut used anywhere. Also, the construction of most measurement devices is based on QT nowadays, since most are based on semiconductor technology, which in turn rests on condensed-matter many-body QT. In other words, there is no clear boundary between the classical behavior of macroscopic objects and the quantum behavior of microscopic ones. The former is a more or less applicable approximation of the latter for describing macroscopic ("relevant") observables. It's not a fundamental cut, but the application of an approximation.

Nobody talks about any "cut" when one uses non-relativistic approximations in classical mechanics or electrodynamics. There the non-relativistic treatment is a more or less applicable approximation to the fully relativistic one. That's the usual structure of physical theories: Different models or theories that are successful in describing certain phenomena can be approximations of each other. The more comprehensive theory tells us the range of validity of the approximations. The same holds for QT vs. classical approximations.

Historically the cut is due to the Heisenberg flavor of the Copenhagen interpretation and enters the game only because of the collapse hypothesis, which in my opinion is as superfluous and misleading as the introduction of a cut.
 
  • #263
vanhees71 said:
I still don't understand where the cut is made. Experimentalists measure something by repeating a preparation and measurement procedure many times and then analyzing the experiment statistically. That's the way you "test hypotheses" in the sense of probability theory, and QT is just a probability theory for physical processes in nature, no more and no less. There's no quantum-classical cut used anywhere.

Yes, there is. The probabilities occur in quantum mechanics (or QFT) in the form of the Born rule:

When you perform a measurement on a system, the result is one of the eigenvalues of the corresponding operator and the probability of an outcome is the square of the amplitude for the system to be in the corresponding eigenstate.

The use of probabilities involves a cut: You can describe it in several different ways:
  1. The cut between the system, which is the thing being measured, and the observer, which is the thing being measured.
  2. The cut between ordinary interactions, which are described by terms appearing in the Hamiltonian, and measurements, which are described by the Born rule. The system evolves smoothly and deterministically according to Hamiltonian dynamics in response to ordinary interactions, but measurements behave probabilistically.
  3. The cut between microscopic systems, which are described by quantum amplitudes and which can be in superpositions of drastically different states, and macroscopic systems, which are observed to always have approximately definite values for quantities such as position. (By "approximately definite", let me illustrate by example: An automobile's position is subject to imprecision--you can't really say where it is with a precision much greater than the size of the automobile. But you would never say that it is uncertain whether an automobile is in London or New York City. So to within the limits of precision, its location is definite.)
To apply the Born rule, you must declare some aspect of the universe to be on the observer/measurement/macroscopic side of the cut.

This seems obvious to me. Certainly, if you consider two electrons interacting, does it make any sense to say that one electron is measuring the z-component of the spin of the other electron? Obviously not. The two electrons interact, and that interaction might depend on their spins, but there is never a point where one electron can be said to have measured the spin of the other. The Born rule can only be applied if there is a system that is macroscopic, that is capable of forming permanent records of past interactions. So it requires a "cut" between those parts of the universe that are treated using (reversible) microscopic dynamics and those parts that involve irreversible changes during a measurement interaction. If the measurement device is described by reversible dynamics, then there is nowhere to apply the Born rule.
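To make the asymmetry concrete, here is a minimal sketch in Python/NumPy (an illustration only; the precession example and the function names are assumptions made for the demo, not anything stated in the post above). Unitary evolution by itself only turns one state vector into another; numbers that deserve to be called probabilities appear only once an observable is declared to sit on the "measured" side:

```python
import numpy as np

# Pauli-x matrix; precession about x stands in for an "ordinary interaction".
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def evolve(t, omega=1.0):
    """Unitary evolution U(t) = exp(-i*omega*t*sx/2): smooth and deterministic."""
    return np.cos(omega * t / 2) * np.eye(2) - 1j * np.sin(omega * t / 2) * sx

psi0 = np.array([1.0, 0.0], dtype=complex)   # prepared spin-up along z
psi_t = evolve(2.0) @ psi0                   # Schroedinger evolution: just a new state vector

# Born rule: probabilities exist only relative to a declared measurement.
p_up_z = abs(psi_t[0]) ** 2                               # if S_z is measured
plus_x = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
p_up_x = abs(np.vdot(plus_x, psi_t)) ** 2                 # if S_x is measured instead
print(f"P(+z) = {p_up_z:.3f}, P(+x) = {p_up_x:.3f}")      # ~0.292 and 0.500
```

Nothing in the unitary step produces the two numbers in the last line; they arise only after singling out ##S_z## or ##S_x## as "what is measured", which is where the cut enters.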
 
  • #264
But the interactions described by QT are the interactions between the measured object and the measurement apparatus, or at least there's no hint that, just because something is constructed by humans as a measurement device, all of a sudden interactions different from the fundamental ones described by QT are at work. This reminds me somehow of the old idea that there's a special "vis viva" at work distinguishing natural laws valid for living organisms from those for non-living matter (even there it's hard to clearly define a boundary, e.g., are viruses or prions living beings or not?). That macroscopic devices are able to store information about the measured quantum system doesn't imply that other physical laws are at work than those described by the Standard Model, i.e., QT.
 
  • #265
It seems to me the discussion about the Heisenberg cut is not about its existence in the theory, but about whether that is a problem or not.
The cut is inherent to the construction of QM theory and in simple terms consists of the non-debatable coexistence in the formalism of classical observables and quantum probabilities per the Born rule. Whether one considers this a problem for the theory or just accepts it naturally like vanhees does should be a personal conclusion, not a source of dispute.
 
Last edited:
  • #266
vanhees71 said:
But the interactions described by QT are the interactions between the measured object and the measurement apparatus,

But the additional rule, that a measurement always results in an eigenvalue, with probabilities given by the square of the amplitude, applies only to measurements. So if you have a rule that only applies to some kinds of interactions, and not others, then you have a cut. You certainly can't apply that rule to the interaction between two electrons; you can't say that one electron is measuring something about the other electron, and will get such and such a result with such and such a probability.

What von Neumann first noted was that you can always move the cut to enlarge the part of the universe that is on the "microscopic" side, but you can't eliminate it. Without the cut, you don't have the Born rule. If you analyzed everything using microscopic dynamics, then there is suddenly no role for probabilities (unless the people trying to derive probabilities for Many Worlds succeed).
 
  • Like
Likes RockyMarciano
  • #267
Also, IMO the cut doesn't exist in nature, just in the QM formalism. In BM (and in MWI) the cut in the formalism is taken as real, and so one needs to build a classical ontology coexisting with the quantum phenomenology.
 
  • Like
Likes Mentz114
  • #268
RockyMarciano said:
It seems to me the discussion about the Heisenberg cut is not about its existence in the theory, but about whether that is a problem or not.
The cut is inherent to the construction of QM theory and in simple terms consists of the non-debatable coexistence in the formalism of classical observables and quantum probabilities per the Born rule. Whether one considers this a problem for the theory or just accepts it naturally like vanhees does should be a personal conclusion, not a source of dispute.

You can certainly say, as von Neumann did, that there are two kinds of interactions: normal interactions that are described by Schrodinger's equation (or quantum field theory), and measurement interactions that involve a discontinuous change to a system. That's ugly, but it's coherent. What seems incoherent to me is accepting the practical advantages of the two types of interaction while also denying that there is anything special about measurement. That seems inconsistent, not just a matter of personal taste.
 
  • #269
RockyMarciano said:
Also, IMO the cut doesn't exist in nature, just in the QM formalism. In BM (and in MWI) the cut in the formalism is taken as real, and so one needs to build a classical ontology coexisting with the quantum phenomenology.

Yes, but if the cut exists only in the formalism, that suggests (to me) that its appearance is due to not having the formalism figured out completely. Something analogous might be Special Relativity. The elegant way that SR is taught today makes no reference to a preferred rest frame. However, you can imagine an alternate history in which Einstein never came along, and we were stuck with an ad-hoc theory in which there is a preferred reference frame such that:
  • In that frame, light travels at speed c in all directions.
  • Clocks (and other systems that change internal state with time) moving relative to this frame run slower.
  • Physical objects moving relative to this frame are contracted in the direction of their motion.
You can make such an ad hoc theory to be observationally equivalent to SR, but you're stuck with an unobservable preferred rest frame. People could then speculate that this is only needed for the formalism, but isn't really part of nature.
 
  • Like
Likes kurt101 and Jilang
  • #270
stevendaryl said:
You can certainly say, as von Neumann did, that there are two kinds of interactions: normal interactions that are described by Schrodinger's equation (or quantum field theory), and measurement interactions that involve a discontinuous change to a system. That's ugly, but it's coherent. What seems incoherent to me is accepting the practical advantages of the two types of interaction while also denying that there is anything special about measurement. That seems inconsistent, not just a matter of personal taste.
Agreed. It is nevertheless an inconsistency that doesn't bother experimentalists (and a certain theorist who appears to have an experimentalist soul ;-)) in their application of the theory, and it is no use trying to make them feel the inconsistency.
 
  • #271
RockyMarciano said:
Agreed. It is nevertheless an inconsistency that doesn't bother experimentalists (and a certain theorist who appears to have an experimentalist soul) in their application of the theory, and it is no use trying to make them feel the inconsistency.

I have to agree. Logically, the appearance of an inconsistency is a catastrophe, because in an inconsistent system you can prove anything. But practically, people can live perfectly well with an inconsistency; they just learn how to step around it without stepping into it (so to speak).
 
  • #272
stevendaryl said:
Yes, but if the cut exists only in the formalism, that suggests (to me) that its appearance is due to not having the formalism figured out completely. Something analogous might be Special Relativity. The elegant way that SR is taught today makes no reference to a preferred rest frame. However, you can imagine an alternate history in which Einstein never came along, and we were stuck with an ad-hoc theory in which there is a preferred reference frame such that:
  • In that frame, light travels at speed c in all directions.
  • Clocks (and other systems that change internal state with time) moving relative to this frame run slower.
  • Physical objects moving relative to this frame are contracted in the direction of their motion.
You can make such an ad hoc theory to be observationally equivalent to SR, but you're stuck with an unobservable preferred rest frame. People could then speculate that this is only needed for the formalism, but isn't really part of nature.
Well, you still have people harping on about the ether.
 
  • #273

stevendaryl said:
Yes, but if the cut exists only in the formalism, that suggests (to me) that its appearance is due to not having the formalism figured out completely. Something analogous might be Special Relativity. The elegant way that SR is taught today makes no reference to a preferred rest frame. However, you can imagine an alternate history in which Einstein never came along, and we were stuck with an ad-hoc theory in which there is a preferred reference frame such that:
  • In that frame, light travels at speed c in all directions.
  • Clocks (and other systems that change internal state with time) moving relative to this frame run slower.
  • Physical objects moving relative to this frame are contracted in the direction of their motion.
You can make such an ad hoc theory to be observationally equivalent to SR, but you're stuck with an unobservable preferred rest frame. People could then speculate that this is only needed for the formalism, but isn't really part of nature.
I didn't fully take in the content of this in my previous response; it is a deep reflection. But it leads to doubt about the formalism.
 
  • #274
stevendaryl said:
But the additional rule, that a measurement always results in an eigenvalue, with probabilities given by the square of the amplitude, applies only to measurements. So if you have a rule that only applies to some kinds of interactions, and not others, then you have a cut. You certainly can't apply that rule to the interaction between two electrons; you can't say that one electron is measuring something about the other electron, and will get such and such a result with such and such a probability.

What von Neumann first noted was that you can always move the cut to enlarge the part of the universe that is on the "microscopic" side, but you can't eliminate it. Without the cut, you don't have the Born rule. If you analyzed everything using microscopic dynamics, then there is suddenly no role for probabilities (unless the people trying to derive probabilities for Many Worlds succeed).
Of course, these are part of the postulates of QT, but it doesn't imply that there is a distinct classical world or that the measurement is outside of the laws described by QT. There are no extra rules. Observables are in the formalism represented by self-adjoint operators on a Hilbert space, and possible values these observables take when measured are the eigenvalues of these operators. This is formalism, but not a cut between quantum and classical laws. I also don't see why it should be impossible to measure one electron with the help of another electron. In the Stern-Gerlach experiment you measure a spin state by the position of the particle itself. The Born rule is another independent postulate, but it also implies no cut. It just says how to evaluate probabilities for the outcomes of measurements given the state of the system. I don't understand your last sentence. Why aren't there any probabilities if I analyze everything using microscopic dynamics? The microscopic dynamics, i.e., QT, describes probabilities and only probabilities. What else should the meaning of this dynamics be than the time evolution of probability distributions for observables?
 
  • #275
stevendaryl said:
But the additional rule, that a measurement always results in an eigenvalue, with probabilities given by the square of the amplitude, applies only to measurements. So if you have a rule that only applies to some kinds of interactions, and not others, then you have a cut. You certainly can't apply that rule to the interaction between two electrons; you can't say that one electron is measuring something about the other electron, and will get such and such a result with such and such a probability.

OK, that is pretty clear. So if these electrons interact, we can calculate the amplitudes of various outcomes, but we cannot (or should not) square these amplitudes to get probabilities? I'm a bit confused, as you can see.
 
  • #276
The "amplitudes" are not observable according to standard QT, the probabilities are (on ensembles of accordingly prepared systems)!
 
  • #277
vanhees71 said:
I still don't understand where the cut is made. Experimentalists measure something by repeating a preparation and measurement procedure many times and then analyzing the experiment statistically. That's the way you "test hypotheses" in the sense of probability theory, and QT is just a probability theory for physical processes in nature, no more and no less. There's no quantum-classical cut used anywhere. Also, the construction of most measurement devices is based on QT nowadays, since most are based on semiconductor technology, which in turn rests on condensed-matter many-body QT. In other words, there is no clear boundary between the classical behavior of macroscopic objects and the quantum behavior of microscopic ones. The former is a more or less applicable approximation of the latter for describing macroscopic ("relevant") observables. It's not a fundamental cut, but the application of an approximation.

Nobody talks about any "cut" when one uses non-relativistic approximations in classical mechanics or electrodynamics. There the non-relativistic treatment is a more or less applicable approximation to the fully relativistic one. That's the usual structure of physical theories: Different models or theories that are successful in describing certain phenomena can be approximations of each other. The more comprehensive theory tells us the range of validity of the approximations. The same holds for QT vs. classical approximations.

Historically the cut is due to the Heisenberg flavor of the Copenhagen interpretation and enters the game only because of the collapse hypothesis, which in my opinion is as superfluous and misleading as the introduction of a cut.

In classical mechanics, e.g. general relativity, there is no problem with the notion of the state of the universe. In quantum mechanics, what is the meaning of the quantum state of the universe?
 
  • #278
vanhees71 said:
The "amplitudes" are not observable according to standard QT, the probabilities are (on ensembles of accordingly prepared systems)!
That's exactly what @stevendaryl means (correct me if I'm wrong!). The dynamics of a quantum system only involves probability amplitudes, and if no one wants to know anything about the system, no probability comes into play. So a unified description of all phenomena using quantum mechanics can't involve axioms about probabilities, because if all things that happen are governed by quantum mechanics, we should be able to treat measurements with the same language with which we treat Schrodinger evolution. The fact that we have to introduce Born's rule to deal with measurements is the Heisenberg cut. Of course you may say that it's ridiculous to apply quantum mechanics to macroscopic objects because it'll be unnecessarily complicated and that's why we introduce Born's rule. But that viewpoint can only be true if you can derive Born's rule from the regular dynamics of quantum systems, and that's what all these interpretations are about. Of course decoherence makes things better for this viewpoint, but I'm not sure we can count on it to solve the problem completely. Can we?
 
  • Like
Likes zonde
  • #279
vanhees71 said:
The "amplitudes" are not observable according to standard QT, the probabilities are (on ensembles of accordingly prepared systems)!
I meant: use the theory to get predictions using amplitudes!
 
  • #280
atyy said:
But you like Consistent Histories, which means your dislike of BM is only a matter of taste - unlike vanhees71's, which is a technical disagreement. If we apply vanhees71's view, Consistent Histories is also pointless.
I don't think it's just a matter of taste to reject Bohmian mechanics. I think the situation is comparable to the epicycle theory. People just couldn't imagine that the Earth might not be in the center of the universe, so they had to come up with contrived explanations for the motion of the planets. Bohmian mechanics is very similar: People don't want to give up the naive idealization that particles can be modeled as points in ##\mathbb R^3##, so they have to invent absurd mechanisms in order to maintain this idealization. Rationally, there is just no good reason why nature should be mappable to points in ##\mathbb R^3## (especially if you recognize that ##\mathbb R^3## is just a mathematical object, whose properties depend on the choice of the underlying set theory axioms). It's just an idealization, whose domain of applicability is exceeded in the quantum regime. Progress in science depends on recognizing wrong ideas and replacing them by something better. Adhering to wrong ideas has never led to scientific progress, and indeed I'm not aware of a single relevant discovery that has emerged from Bohmian mechanics. On the other hand, great advances (such as QFT and the standard model) were made by taking the quantum formalism seriously. If the only positive thing that can be said about a theory is that it is not technically excluded by observations, then it's pretty clear that it is a dead end.

From reading vanhees' posts, I think he is secretly a consistent histories advocate without knowing it yet.

vanhees71 said:
Can you summarize what Consistent Histories claims beyond the minimal interpretation? Perhaps I'm too pragmatic to realize where the problem with the minimal interpretation is, but I just don't get why it should help to introduce any elements of interpretation that go beyond Born's rule, which establishes the meaning of the formalism concerning observable (and observed!) facts about nature.
Consistent histories is essentially the minimal interpretation stated with more conceptual clarity. It keeps all the concepts from Copenhagen, but it interprets time evolution as a stochastic process, much like classical Brownian motion. The insertion of projection operators between the time evolution operators doesn't correspond to any physical process. Instead, it just selects a subset of histories from the path space, whose probability of occurring is to be calculated. It's completely analogous to the insertion of characteristic functions in the case of Brownian motion. No explicit references to measurements remain and all quantum paradoxes are resolved.

In Brownian motion, the Wiener measure on the space of Brownian paths is constructed by specifying it on so called cylinder sets of paths ##x(t)##:
$$O^{t_1 t_2 \ldots}_{B_1 B_2 \ldots}=\{x : x(t_1)\in B_1, x(t_2) \in B_2, \ldots \}$$
For example, the probability for a path (with ##x(t_0)=x_0##) to be in the cylinder set ##O^{t_1 t_2}_{B_1 B_2}## is (up to some normalization factors) given by
$$P(O^{t_1 t_2}_{B_1 B_2})=\int dx_2 dx_1 \chi_{B_2}(x_2) e^{-\frac{(x_2-x_1)^2}{t_2-t_1}} \chi_{B_1}(x_1) e^{-\frac{(x_1-x_0)^2}{t_1-t_0}} \hat = \lVert P_{B_2} U(t_2-t_1) P_{B_1} U(t_1-t_0)\delta_{x_0 t_0}\rVert$$
Here, I have defined the projections ##(P_B f)(x)=\chi_B(x) f(x)## and the time evolution operators ##U(t)=e^{-t\Delta}##, which are just expressed as integrations against the heat kernel in the above integral. Of course, nobody would think of the projectors ##P_B## as a form of time evolution in the totally classical case of Brownian motion. The Brownian particle just follows some random path, and the probability for a given set of paths just happens to involve this projector.

In quantum mechanics, the situation is completely analogous. The projectors of the position operator ##\hat x## are also given by characteristic functions ##\chi_B(x)## and the time evolution of the Schrödinger equation (for example for the free particle) is given by ##U(t)=e^{-it\frac{\Delta}{2m}}##. This suggests in a very compelling way that the projections are not "a different form of time evolution", like the Copenhagen interpretation suggests. Measurements don't play any distinguished role.
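For readers who prefer numbers, here is a minimal discretized sketch of the cylinder-set probability above (Python/NumPy; the grid, the intervals ##B_1=[0,2]## and ##B_2=[-1,1]##, and the normalization convention are illustrative choices, not part of the argument):

```python
import numpy as np

x = np.linspace(-10, 10, 1001)             # discretized line
x0, t0, t1, t2 = 0.0, 0.0, 1.0, 2.0

def transition(t):
    """Discrete heat kernel exp(-(x_i - x_j)^2 / t), columns normalized to 1."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / t)
    return K / K.sum(axis=0, keepdims=True)

def projector(a, b):
    """(P_B f)(x) = chi_B(x) f(x) for the interval B = [a, b]."""
    return np.diag(((x >= a) & (x <= b)).astype(float))

delta = np.zeros(len(x))
delta[np.argmin(np.abs(x - x0))] = 1.0     # the particle starts at x0 with certainty

# P(O^{t1 t2}_{B1 B2}): propagate, project onto B1, propagate, project onto B2, sum.
prob = np.sum(projector(-1, 1) @ transition(t2 - t1)
              @ projector(0, 2) @ transition(t1 - t0) @ delta)
print(f"P(x(t1) in [0, 2] and x(t2) in [-1, 1]) = {prob:.4f}")
```

The projectors only select which subset of paths the question is about; they change nothing in the diffusion itself, which is exactly the reading consistent histories gives to the projectors in the quantum formula.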
 
Last edited:
  • Like
Likes vanhees71, Mentz114 and dextercioby
  • #281
stevendaryl said:
The use of probabilities involves a cut: You can describe it in several different ways:
  1. The cut between the system, which is the thing being measured, and the observer, which is the thing being measured.
There is a way to make this sound quite ordinary: if something is measured in the lab, of course there is an observer and of course he decides what the system of interest is and what part of the universe he is going to ignore. This is a common characteristic of all scientific experiments.

So from this point of view, the Heisenberg cut seems like a totally ordinary thing which is part of all of science. I don't claim that there's no problem at all but that if there is one, it is surprisingly difficult to pin down.

Also, the omnipresent entanglement of QM suggests that we cannot just ignore a part of the whole without altering something important. So even without the Born rule, QM tells us that the naive application of the scientific method in the way I described above - which involves separating the observer and the environment from the system - may lead to a different behaviour.

(and I suppose that you wanted to write something along the lines of "the observer, who does the measuring" in the quote above)
 
  • #282
But @rubi, for macroscopic objects we see only one history. How does consistent histories explain that?

Also, the quote below is from the book "Do we really understand quantum mechanics":
Which history will actually occur in a given realization of the physical system is not known in advance: we postulate the existence of some fundamentally random process of Nature that selects one single history among all those of the family.
This seems like collapse, more precisely, an objective collapse. Do you reject this, @rubi, or do you have some explanation?
 
  • #283
To go back to BM, I would like to know if there is an empirical way to distinguish a "local realist theory" from a "nonlocal realist theory" like BM. In a previous post Demystifier claimed that even if BM is nonlocal (meaning that it allows FTL influence), in practice it doesn't allow FTL signaling, since this FTL influence can't be observed or measured. And if this is the case, I have to wonder what the empirical difference is between being local or nonlocal for a realistic theory, and if there is none, the Bell inequalities would model both local and nonlocal realist theories (since they would be empirically indistinguishable), and their experimental violation would serve to reject realist theories in general without any further assumption.
So how are they empirically (i.e., scientifically as opposed to philosophically) distinguished? Does anybody know?
 
  • #284
ShayanJ said:
But @rubi, for macroscopic objects we see only one history. How does consistent histories explain that?
How is that different from ordinary classical Brownian motion? The particle follows exactly one path. We just don't know which one. The Wiener measure specifies the probability density for a certain path. Quantum mechanics is a stochastic theory that specifies probability distributions over spaces of quantum histories, just like Brownian motion or other stochastic processes specify probability distributions over spaces of classical trajectories. We see only one Brownian path and we see only one quantum history.

ShayanJ said:
This seems like collapse, more precisely, an objective collapse. Do you reject this, @rubi, or do you have some explanation?
It has nothing to do with a "collapse". The situation is exactly the same in classical stochastic processes. One Brownian path/quantum history is randomly selected. If you toss a coin, one side is randomly selected. The only difference is the word "quantum".
 
  • #285
kith said:
There is a way to make this sound quite ordinary: if something is measured in the lab, of course there is an observer and of course he decides what the system of interest is and what part of the universe he is going to ignore. This is a common characteristic of all scientific experiments.

But in QM, the difference is not simply a matter of what to choose to ignore. Different rules apply to measurements than to other types of interactions.

In what I would consider a coherent formalism, you would describe how the world works, independently of observers, and then add physical-phenomenal axioms saying that such-and-such a condition of such-and-such subsystem counts as a measurement of such-and-such a property. There would be no additional physics to the measurement process, since it would just be an ordinary process.

(and I suppose that you wanted to write something along the lines of "the observer, who does the measuring" in the quote above)

Yes.
 
  • #286
vanhees71 said:
Of course, these are part of the postulates of QT, but it doesn't imply that there is a distinct classical world or that the measurement is outside of the laws described by QT. There are no extra rules.

The rule that a measurement results in an eigenvalue with probabilities given by the square of the amplitude IS an extra rule. It only applies to measurement interactions, and not to other types of interactions.

Observables are in the formalism represented by self-adjoint operators on a Hilbert space, and possible values these observables take when measured are the eigenvalues of these operators.

So that's an example of a rule that applies to measurements and not to other interactions. If it applied to other types of interactions, then you wouldn't have to use the phrase "when measured".
 
  • #287
Mentz114 said:
OK, that is pretty clear. So if these electrons interact, we can calculate the amplitudes of various outcomes, but we cannot (or should not) square these amplitudes to get probabilities?

If there are no measurements, then there are no outcomes. There are only states, and those states evolve deterministically, not probabilistically.
 
  • #288
...
stevendaryl said:
If there are no measurements, then there are no outcomes. There are only states, and those states evolve deterministically, not probabilistically.
Ok, I assume you mean no operator was assumed to operate. Given some initial states and some putative final states, is it possible to calculate the probabilities of the final states? I have to ask because I'm not sure if this is always possible.

But I take the point about the extra rule ...
 
  • #289
Mentz114 said:
...

Ok, I assume you mean no operator was assumed to operate. Given some initial states and some putative final states, is it possible to calculate the probabilities of the final states? I have to ask because I'm not sure if this is always possible.

But I take the point about the extra rule ...

Given a final state, we can calculate an amplitude for the system ending up in that final state, and we can square that to get a probability. But the problem is that there are infinitely many possible final states, and the corresponding probabilities don't add up to 1 (they add up to infinity). For a concrete example, if you prepare an electron in the spin state ##|u\rangle## (spin-up in the z-direction), and there are no interactions acting on it at all, then:
  • It has probability 1 of ending up spin-up in the z-direction.
  • It has probability 1/2 of ending up spin-up in the x-direction.
  • It has probability 1/2 of ending up spin-up in the y-direction.
  • It has probability 1/2 of ending up spin-up in the direction ##\frac{1}{\sqrt{2}} \hat{x} + \frac{1}{\sqrt{2}} \hat{y}##
  • It has probability 1/2 of ending up spin-up in the direction ##\frac{1}{\sqrt{2}} \hat{x} - \frac{1}{\sqrt{2}} \hat{y}##
  • etc.
If you don't say what's being measured, then you have no way to chop down the set of possibilities to a set of exclusive alternatives whose probabilities add up to 1.
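A quick numerical check of this point (Python/NumPy, purely illustrative; it just reproduces the list above via the Born rule):

```python
import numpy as np

up_z = np.array([1.0, 0.0], dtype=complex)   # prepared state |u>, spin-up along z

def spin_up_along(theta, phi):
    """+1 eigenstate of n.sigma for the direction with polar angles (theta, phi)."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

directions = {"+z": (0.0, 0.0),
              "+x": (np.pi / 2, 0.0),
              "+y": (np.pi / 2, np.pi / 2),
              "(x+y)/sqrt(2)": (np.pi / 2, np.pi / 4),
              "(x-y)/sqrt(2)": (np.pi / 2, -np.pi / 4)}

total = 0.0
for name, (theta, phi) in directions.items():
    p = abs(np.vdot(spin_up_along(theta, phi), up_z)) ** 2   # Born rule
    total += p
    print(f"P(spin-up along {name}) = {p:.2f}")
print(f"Sum over these non-exclusive alternatives = {total:.2f}")   # 3.00, not 1
```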
 
  • #290
vanhees71 said:
I don't understand your last sentence. Why aren't there any probabilities if I analyze everything using microscopic dynamics? The microscopic dynamics, i.e., QT, describes probabilities and only probabilities. What else should the meaning of this dynamics be than the time evolution of probability distributions for observables?

There are no probabilities in QM without a choice of a basis. The microscopic evolution doesn't select a basis.
 
  • #291
stevendaryl said:
Given a final state, we can calculate an amplitude for the system ending up in that final state, and we can square that to get a probability. But the problem is that there are infinitely many possible final states, and the corresponding probabilities don't add up to 1 (they add up to infinity). For a concrete example, if you prepare an electron in the spin state ##|u\rangle## (spin-up in the z-direction), and there are no interactions acting on it at all, then:
  • It has probability 1 of ending up spin-up in the z-direction.
  • It has probability 1/2 of ending up spin-up in the x-direction.
  • It has probability 1/2 of ending up spin-up in the y-direction.
  • It has probability 1/2 of ending up spin-up in the direction ##\frac{1}{\sqrt{2}} \hat{x} + \frac{1}{\sqrt{2}} \hat{y}##
  • It has probability 1/2 of ending up spin-up in the direction ##\frac{1}{\sqrt{2}} \hat{x} - \frac{1}{\sqrt{2}} \hat{y}##
  • etc.
If you don't say what's being measured, then you have no way to chop down the set of possibilities to a set of exclusive alternatives whose probabilities add up to 1.
Thanks, I guess this is not a good time to pursue this, but I'm specifically interested in an interaction because symmetries will come into play which will restrict the outcomes (I think).

Anyway, since this is my last post this year, all the best to you and all PF'ers for 2017.
 
  • #292
Mentz114 said:
Thanks, I guess this is not a good time to pursue this, but I'm specifically interested in an interaction because symmetries will come into play which will restrict the outcomes (I think).

Anyway, since this is my last post this year, all the best to you and all PF'ers for 2017.

Happy New Year!
 
  • #293
stevendaryl said:
But the problem is that there are infinitely many possible final states, and the corresponding probabilities don't add up to 1 (they add up to infinity). For a concrete example, if you prepare an electron in the spin state ##|u\rangle## (spin-up in the z-direction), and there are no interactions acting on it at all, then:
  • It has probability 1 of ending up spin-up in the z-direction.
  • It has probability 1/2 of ending up spin-up in the x-direction.
  • It has probability 1/2 of ending up spin-up in the y-direction.
  • It has probability 1/2 of ending up spin-up in the direction ##\frac{1}{\sqrt{2}} \hat{x} + \frac{1}{\sqrt{2}} \hat{y}##
  • It has probability 1/2 of ending up spin-up in the direction ##\frac{1}{\sqrt{2}} \hat{x} - \frac{1}{\sqrt{2}} \hat{y}##
  • etc.
If you don't say what's being measured, then you have no way to chop down the set of possibilities to a set of exclusive alternatives whose probabilities add up to 1.
This problem is resolved in CH by noting that your alternatives are not mutually exclusive. Probabilities don't need to add up to 1 if the alternatives are not mutually exclusive. You need to choose a set of mutually exclusive alternatives and you will get a proper probability distribution. Several such choices exist and experimental results will always be consistent with any such choice; the physics doesn't depend on it. However, many choices will answer different questions than the questions you're interested in. That's not problematic as long as the physics is consistent with any choice.
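Continuing the toy example from post #289 above (again purely illustrative): pick a mutually exclusive set of alternatives, i.e. an orthonormal basis, and the same Born-rule probabilities do sum to 1:

```python
import numpy as np

up_z = np.array([1.0, 0.0], dtype=complex)                       # prepared spin-up along z
basis_x = [np.array([1.0, 1.0], dtype=complex) / np.sqrt(2),     # |+x>
           np.array([1.0, -1.0], dtype=complex) / np.sqrt(2)]    # |-x>, orthogonal to |+x>
probs = [abs(np.vdot(b, up_z)) ** 2 for b in basis_x]
print(probs, sum(probs))   # approximately [0.5, 0.5] -> 1.0
```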
 
  • #294
rubi said:
From reading vanhees' posts, I think he is secretly a consistent histories advocate without knowing it yet.

CH does not preserve unitary evolution, so he is more likely a secret MWI advocate :biggrin:
 
  • #295
RockyMarciano said:
I would like to know if there is an empirical way to distinguish a "local realist theory" from a "nonlocal realist theory" like BM.

It depends on how you define these terms. Or you could just realize that ordinary language is not well suited to this kind of discussion, and specify theories in terms of their actual math and their actual predictions, which is how we distinguish them in practice. It's easy to test whether a theory's predictions satisfy the Bell inequalities or not. Whether you want to call a theory that violates them "nonlocal" or "non-realist" is a matter of words, not physics.
 
  • #296
kith said:
There is a way to make this sound quite ordinary: if something is measured in the lab, of course there is an observer and of course he decides what the system of interest is and what part of the universe he is going to ignore. This is a common characteristic of all scientific experiments.

So from this point of view, the Heisenberg cut seems like a totally ordinary thing which is part of all of science. I don't claim that there's no problem at all but that if there is one, it is surprisingly difficult to pin down.

Also, the omnipresent entanglement of QM suggests that we cannot just ignore a part of the whole without altering something important. So even without the Born rule, QM tells us that the naive application of the scientific method in the way I described above - which involves separating the observer and the environment from the system - may lead to a different behaviour.

(and I suppose that you wanted to write something along the lines of "the observer, who does the measuring" in the quote above)

But why can we ignore part of the universe? Is it because of locality? Why is the universe operationally local, even though reality is nonlocal (or retrocausal etc ...)?

Bohmian Mechanics is one example of emergent operational locality. Holography is another.
 
  • #297
atyy said:
In classical mechanics, e.g. general relativity, there is no problem with the notion of the state of the universe. In quantum mechanics, what is the meaning of the quantum state of the universe?
It's an empty phrase in all of physics. I'm not even able to define what "the state of the universe" should mean, no matter whether within classical or quantum physics.
 
Last edited:
  • #298
atyy said:
But why can we ignore part of the universe? Is it because of locality? Why is the universe operationally local, even though reality is nonlocal (or retrocausal etc ...)?

Bohmian Mechanics is one example of emergent operational locality. Holography is another.
We can ignore part of the universe (almost all of the universe, in fact), because we can make only pretty local observations, and the locality of relativistic QFT ensures the validity of the linked-cluster principle. So far this model of "reality" is pretty successful, and according to this model interactions are local and microcausal. Retrocausality is just a misnomer for the possibility of choosing subensembles of full ensembles of measurements due to a fixed measurement protocol. You don't "change the past"; you just choose which part of the data you look at (take the Walborn et al quantum eraser experiment as a very clear typical example of this kind of "retrocausality").
 
  • #299
ShayanJ said:
That's exactly what @stevendaryl means (correct me if I'm wrong!). The dynamics of a quantum system only involves probability amplitudes, and if no one wants to know anything about the system, no probability comes into play. So a unified description of all phenomena using quantum mechanics can't involve axioms about probabilities, because if all things that happen are governed by quantum mechanics, we should be able to treat measurements with the same language with which we treat Schrodinger evolution. The fact that we have to introduce Born's rule to deal with measurements is the Heisenberg cut. Of course you may say that it's ridiculous to apply quantum mechanics to macroscopic objects because it'll be unnecessarily complicated and that's why we introduce Born's rule. But that viewpoint can only be true if you can derive Born's rule from the regular dynamics of quantum systems, and that's what all these interpretations are about. Of course decoherence makes things better for this viewpoint, but I'm not sure we can count on it to solve the problem completely. Can we?
Without Born's rule, I've no clue what quantum theory is about. Then it's a funny mathematical game to play without any contact with measurements and observations in the real world. Why this should imply that there are two different dynamics for "classical" and "quantum" systems is still an enigma to me. For me the behavior of macroscopic objects is explainable with standard QT and involves a corresponding coarse-graining procedure. It's of course impractical, indeed practically impossible, to describe the ##10^{24}## degrees of freedom of 1 mole of gas in some container in all microscopic detail. It's sufficient to describe the relevant observables in the sense of statistical quantum physics.
 
  • #300
rubi said:
From reading vanhees' posts, I think he is secretly a consistent histories advocate without knowing it yet. Consistent histories is essentially the minimal interpretation stated with more conceptual clarity. It keeps all the concepts from Copenhagen, but it interprets time evolution as a stochastic process, much like classical Brownian motion. The insertion of projection operators between the time evolution operators doesn't correspond to any physical process. Instead, it just selects a subset of histories from the path space, whose probability of occurring is to be calculated. It's completely analogous to the insertion of characteristic functions in the case of Brownian motion. No explicit references to measurements remain and all quantum paradoxes are resolved.

In Brownian motion, the Wiener measure on the space of Brownian paths is constructed by specifying it on so called cylinder sets of paths ##x(t)##:
$$O^{t_1 t_2 \ldots}_{B_1 B_2 \ldots}=\{x : x(t_1)\in B_1, x(t_2) \in B_2, \ldots \}$$
For example, the probability for a path (with ##x(t_0)=x_0##) to be in the cylinder set ##O^{t_1 t_2}_{B_1 B_2}## is (up to some normalization factors) given by
$$P(O^{t_1 t_2}_{B_1 B_2})=\int dx_2 dx_1 \chi_{B_2}(x_2) e^{-\frac{(x_2-x_1)^2}{t_2-t_1}} \chi_{B_1}(x_1) e^{-\frac{(x_1-x_0)^2}{t_1-t_0}} \hat = \lVert P_{B_2} U(t_2-t_1) P_{B_1} U(t_1-t_0)\delta_{x_0 t_0}\rVert$$
Here, I have defined the projections ##(P_B f)(x)=\chi_B(x) f(x)## and the time evolution operators ##U(t)=e^{-t\Delta}##, which are just expressed as integrations against the heat kernel in the above integral. Of course, nobody would think of the projectors ##P_B## as a form of time evolution in the totally classical case of Brownian motion. The Brownian particle just follows some random path, and the probability for a given set of paths just happens to involve this projector.

In quantum mechanics, the situation is completely analogous. The projectors of the position operator ##\hat x## are also given by characteristic functions ##\chi_B(x)## and the time evolution of the Schrödinger equation (for example for the free particle) is given by ##U(t)=e^{-it\frac{\Delta}{2m}}##. This suggests in a very compelling way that the projections are not "a different form of time evolution", like the Copenhagen interpretation suggests. Measurements don't play any distinguished role.
That's indeed just minimally interpreted QT. Why one labels it as "consistent histories" is not clear to me yet, but if this is all there is to it, I've no objections against it.
 