An argument against Bohmian mechanics?

  • #301
stevendaryl said:
The rule that a measurement results in an eigenvalue with probabilities given by the square of the amplitude IS an extra rule. It only applies to measurement interactions, and not to other types of interactions.
Yes, it's part of the standard rules of QT in the minimal interpretation, but there's nothing special in interactions between the system and the measurement apparatus. The same rules apply, since a measurement apparatus consists of the same microscopic building blocks as any other object.

So that's an example of a rule that applies to measurements and not to other interactions. If it applied to other types of interactions, then you wouldn't have to use the phrase "when measured".
I still don't understand how one can come to such a conclusion. It's just a tautology: if I want to know a (more or less precise) value of an observable, I have to measure it. No matter whether I'm thinking in terms of classical or quantum theory.
 
  • #302
stevendaryl said:
There are no probabilities in QM without a choice of a basis. The microscopic evolution doesn't select a basis.
In the standard formalism there is a large freedom in choosing the time-evolution picture. Thus states, represented by the statistical operator ##\hat{\rho}(t)##, and eigenvectors ##|a(t) \rangle## of a complete set of compatible observables, represented by a corresponding set of self-adjoint operators ##\hat{A}(t)##, have by themselves no physical meaning, i.e., they do not refer to directly observable/measurable phenomena. That is the case only for the corresponding probability distributions, $$P(t,a)=\langle a(t)|\hat{\rho}(t)|a(t) \rangle.$$
Thus there are probabilities in standard QT from the very beginning. They are part of the postulates and the only link between the mathematical formalism and observable/measurable phenomena in nature.
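To make the formula concrete, here is a minimal numerical sketch (my own illustration, not from the posts; the amplitudes are arbitrary) of computing ##P(a)=\langle a|\hat{\rho}|a \rangle## for a single qubit in the ##S_z## eigenbasis:

```python
import numpy as np

# Minimal sketch: Born probabilities P(a) = <a|rho|a> for a single qubit,
# with rho a density matrix and {|a>} the S_z eigenbasis.
up = np.array([1.0, 0.0], dtype=complex)
down = np.array([0.0, 1.0], dtype=complex)

# A pure state rho = |psi><psi| with |psi> = alpha|up> + beta|down>;
# alpha and beta are arbitrary illustrative amplitudes.
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
psi = alpha * up + beta * down
rho = np.outer(psi, psi.conj())

for label, a in (("up", up), ("down", down)):
    P = np.real(a.conj() @ rho @ a)    # P(a) = <a|rho|a>
    print(f"P({label}) = {P:.2f}")     # -> P(up) = 0.30, P(down) = 0.70
```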
 
  • #303
vanhees71 said:
Without Born's rule, I've no clue what quantum theory is about. Then it's a funny mathematical game to play without any contact with measurements and observations in the real world.
There was a time when the universe was playing that "funny mathematical game"! I'm sure even now there are places in the universe where that game is still being played.
 
  • #305
vanhees71 said:
?
I suppose that's for me!
I meant there was a time when no people were around to observe anything, and yet the universe was behaving quantum mechanically. And even in the present, there are parts of the universe that we can't observe, and still quantum mechanics applies to them. Unless you're willing to assume quantum mechanics applies only when people are around to observe the result!
 
  • #306
PeterDonis said:
It depends on how you define these terms. Or you could just realize that ordinary language is not well suited to this kind of discussion, and specify theories in terms of their actual math and their actual predictions, which is how we distinguish them in practice. It's easy to test whether a theory's predictions satisfy the Bell inequalities or not.
I did define the terms "local" and "realist" in previous posts, and both definitions have mathematical content (the content of "realist" is summarized in the Bell inequalities, and local/nonlocal is defined in terms of the physical possibility, or not, of FTL signals).
Whether you want to call a theory that violates them "nonlocal" or "non-realist" is a matter of words, not physics.
Well, for some the distinction does seem to be about physics, although in fact it seems more about philosophy the way it is presented here: http://fetzer-franklin-fund.org/media/emqm15-reconcile-non-local-reality-local-non-reality-3/
 
  • #307
vanhees71 said:
Yes, it's part of the standard rules of QT in the minimal interpretation, but there's nothing special in interactions between the system and the measurement apparatus.

Then why does the rule single out measurements?

I still don't understand how one can come to such a conclusion. It's just a tautology: if I want to know a (more or less precise) value of an observable, I have to measure it. No matter whether I'm thinking in terms of classical or quantum theory.

There is no special interaction associated with measurements in classical theory. Look, try to formulate the Born rule without mentioning "measurement" or a macroscopic/microscopic distinction. It is not possible. In contrast, the rules for classical mechanics can be formulated without mentioning measurement. That doesn't mean that there are no measurements in classical mechanics, but that measurements are a derived concept, not a primitive concept.

We could try to make measurements a derived concept in QM, as well.

What is a measurement? Well, a first cut at this is that a measurement is an interaction between one system (the system to be measured) and a second system (the measuring device) so that the quantity to be measured causes a macroscopic change in the measuring device. Roughly speaking, we have:
  • A system being measured that has some operator ##O## with eigenvalues ##o_i##.
  • A measuring device, which has macroscopic states "##S_{\text{ready}}##" (for the state in which it has not yet measured anything) and "##S_i##" (for the state of having measured value ##o_i##); each macroscopic state will correspond to a huge number of microscopic states.
  • The device is metastable when in the "ready" state, meaning that a small perturbation away from the ready state will lead to an irreversible, entropy-increasing transition to one of the states ##S_i##.
  • The interaction between system and measuring device is such that if the system is in a state having eigenvalue ##o_i##, then the device is overwhelmingly more likely to end up in the state ##S_i## than in any other macroscopic state ##S_j##. (There might be other macroscopic states representing a failed or ambiguous measurement, but I'll ignore those for simplicity.)
Note: There is a bit of circularity here, in that to really make sense of the Born rule, I have to define what a measuring device is, and to define a measuring device, I have to refer to probability (which transitions are much more likely than others), which in quantum mechanics has to involve the Born rule. What we can do, though, is treat the fact that the device makes a transition to state ##S_i## when it interacts with a system in a pure state with eigenvalue ##o_i## as initially a matter of empirical observation, or as derivable from classical or semiclassical physics.

Then the Born rule implies that if the system to be measured is initially in a superposition of states of the form ##\sum_i c_i |\psi_i\rangle##, where ##|\psi_i\rangle## is an eigenstate of ##O## with eigenvalue ##o_i##, then the measuring device will, upon interacting with the system, make a transition to one of the macroscopic states ##S_i## with a probability given by ##|c_i|^2##.

Now, at this point, I think we can see where the talk about "measurement" in quantum mechanics is something of a red herring. Presumably, the fact that a pure eigenstate ##|\psi_i\rangle## of the system triggers the measuring device to go into macroscopic state ##S_i## is in principle derivable from applying Schrödinger's equation to the composite system, if we know the interaction Hamiltonian. But because of the linearity of quantum evolution, one would expect that if the initial state of the system were a superposition ##\sum_i c_i |\psi_i\rangle##, then the final state of the measuring device would be a superposition of different macroscopic states as well. (Okay, decoherence will actually prevent the device from being in a superposition, but the combination system + device + environment would be in a superposition.) So the Born rule for measurements can be re-expressed in what I think is an equivalent form that doesn't involve measurements at all:

If a composite system is in a state of the form ##|\Psi\rangle = \sum_i c_i |\Psi_i\rangle##, where for ##i \neq j##, ##|\Psi_i\rangle## and ##|\Psi_j\rangle## represent macroscopically distinguishable states, then that means that the system is in exactly one of the states ##|\Psi_i\rangle##, with probability ##|c_i|^2##.
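Before going on, a toy calculation may make the linearity point above explicit. This sketch (my own construction; the pointer labels are hypothetical) builds a unitary taking ##|u\rangle|\text{ready}\rangle \to |u\rangle|S_1\rangle## and ##|d\rangle|\text{ready}\rangle \to |d\rangle|S_2\rangle##, and shows that a superposed input comes out as an entangled superposition of macroscopically distinct pointer states:

```python
import numpy as np

# Sketch: linearity of quantum evolution in a pre-measurement interaction.
# System qubit {|u>, |d>}; pointer with states {|ready>, |S1>, |S2>}.
u, d = np.eye(2)
ready, S1, S2 = np.eye(3)

# Define U on the 6-dimensional product space by giving an orthonormal image
# for each product basis vector (unused basis vectors are mapped so that U
# stays unitary).
basis_in  = [np.kron(u, ready), np.kron(u, S1), np.kron(u, S2),
             np.kron(d, ready), np.kron(d, S1), np.kron(d, S2)]
basis_out = [np.kron(u, S1),    np.kron(u, ready), np.kron(u, S2),
             np.kron(d, S2),    np.kron(d, S1),    np.kron(d, ready)]
U = sum(np.outer(o, i) for o, i in zip(basis_out, basis_in))

alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
psi0 = np.kron(alpha * u + beta * d, ready)      # (alpha|u> + beta|d>)|ready>
psi1 = U @ psi0

# Final state is alpha|u,S1> + beta|d,S2>: a superposition of macroscopically
# distinguishable pointer states, exactly as linearity demands.
print(np.allclose(psi1, alpha * np.kron(u, S1) + beta * np.kron(d, S2)))  # True
```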

Note the difference with the usual statement of the Born rule: I'm not saying that the system will be measured to have some eigenvalue ##\lambda_i## with probability ##|c_i|^2##, because that would lead to an infinite regress. You would need a microscopic system, a measuring device to measure the state of the microscopic system, a second measuring device to measure the state of the first measuring device, etc. The infinite regress must stop at some point where something simply HAS some value, not "is measured to have some value".

But with this reformulation of Born's rule, which I'm pretty sure is equivalent to the original, you can see that the macroscopic/microscopic distinction has to be there. You can't apply the rule without the restriction that ##i \neq j## implies ##|\Psi_i\rangle## is macroscopically distinguishable from ##|\Psi_j\rangle##. To see this, take the simple case of a single spin-1/2 particle, where we only consider spin degrees of freedom. A general state can be written as ##|\psi\rangle = \alpha |u\rangle + \beta |d\rangle##. Does that mean that the particle is "really" in the state ##|u\rangle## or the state ##|d\rangle##, and we just don't know which? No, it doesn't mean that. The state ##|\psi\rangle## is a different state from either ##|u\rangle## or ##|d\rangle##, and it has observably different behavior.
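To back up that last claim with numbers, here is a small sketch (my own illustration) showing that the superposition and the ignorance mixture "really ##|u\rangle## or really ##|d\rangle##" predict different values of ##\langle S_x\rangle##:

```python
import numpy as np

# Sketch: the superposition alpha|u> + beta|d> is observably different from
# the classical mixture "really |u> or really |d>". Compare <S_x> in units
# of hbar/2, i.e. the expectation of the Pauli matrix sigma_x.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

up, down = np.eye(2, dtype=complex)
alpha = beta = 1 / np.sqrt(2)

psi = alpha * up + beta * down
rho_pure = np.outer(psi, psi.conj())                  # the superposition
rho_mix = abs(alpha)**2 * np.outer(up, up.conj()) \
        + abs(beta)**2 * np.outer(down, down.conj())  # the ignorance mixture

print(np.trace(rho_pure @ sigma_x).real)  # 1.0: interference terms contribute
print(np.trace(rho_mix @ sigma_x).real)   # 0.0: no interference
```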

If you eliminate "measurement" as a primitive, then you can't apply the Born rule without making a macroscopic/microscopic distinction.
 
  • Like
Likes MrRobotoToo, zonde, ShayanJ and 1 other person
  • #308
ShayanJ said:
I suppose that's for me!
I meant there was a time when no people were around to observe anything, and yet the universe was behaving quantum mechanically. And even in the present, there are parts of the universe that we can't observe, and still quantum mechanics applies to them. Unless you're willing to assume quantum mechanics applies only when people are around to observe the result!

Yeah, to me, the minimal interpretation is schizophrenic (whoops! I guess that's no longer a socially or medically acceptable term for split personality). On the one hand, it denies that there is anything special about measurement, and on the other hand, it seems to declare that the whole theory is meaningless without measurements.
 
  • Like
Likes RockyMarciano
  • #309
stevendaryl said:
Yeah, to me, the minimal interpretation is schizophrenic (whoops! I guess that's no longer a socially or medically acceptable term for split personality). On the one hand, it denies that there is anything special about measurement, and on the other hand, it seems to declare that the whole theory is meaningless without measurements.
To be fair, one must admit that the origin of this situation lies in the mathematical formulation: the Born postulate gives non-classical probabilities as the only way to get predictions about measurements, while the rest of the postulates, based on a Hilbert space, are classical. The only way to avoid a directly schizophrenic contradiction is to add an ad hoc macro-classical/micro-quantum cut to the theory, which establishes a dependency on the classical theory and, of course, an unsolvable enigma about the difference between measurement interactions and all other interactions.

But in the minimal interpretation (and others) this cut is understood as a continuous, smooth approximation from micro to macro situations, so that in principle there isn't really a cut between measurements and the rest of interactions, only the practical difficulty of dealing with micro-quantum scales using our big measurement apparatuses, which are completely quantum too. This actually hinges on the correspondence principle of the classical limit of QM (defined as: a more general theory can be formulated in a logically complete manner independently of the less general theory which forms a limiting case of it), based on certain requisites, like the Ehrenfest equations, among others. The problem is that these requisites depend on the truth of the classical postulates, so the principle is not met; and if those postulates really contradict the Born postulate, they are of no use as a foundation for the correspondence limit and its smooth connection from micro to macro. One is then obliged to acknowledge the introduction of the cut into the theory in order to avoid inconsistency, even if one doesn't think it exists in nature. In the words of Landau: "It is impossible in principle to formulate the basic concepts of QM without using classical mechanics".
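As an aside on the Ehrenfest equations mentioned above, here is a rough numerical check (my own sketch, assuming a harmonic oscillator with ##m=\omega=\hbar=1## and a truncated Fock basis) of the relation ##\mathrm{d}\langle x\rangle/\mathrm{d}t = \langle p\rangle/m##:

```python
import numpy as np
from scipy.linalg import expm

# Sketch: check d<x>/dt = <p>/m for a harmonic oscillator (m = omega = hbar = 1)
# in a truncated Fock basis of dimension N.
N = 60
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)    # annihilation operator: a|n> = sqrt(n)|n-1>
x = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)
H = np.diag(n + 0.5)                # H = a^dagger a + 1/2

# Coherent state |alpha>, built recursively: c_n = c_{n-1} * alpha / sqrt(n).
alpha = 1.0 + 1.0j                  # arbitrary; mean excitation |alpha|^2 = 2
c = np.zeros(N, dtype=complex)
c[0] = np.exp(-abs(alpha)**2 / 2)
for k in range(1, N):
    c[k] = c[k - 1] * alpha / np.sqrt(k)

def expect(op, state):
    return np.real(state.conj() @ op @ state)

dt = 1e-3
psi0 = c
psi1 = expm(-1j * H * dt) @ psi0

dx_dt = (expect(x, psi1) - expect(x, psi0)) / dt    # finite-difference d<x>/dt
p_avg = 0.5 * (expect(p, psi0) + expect(p, psi1))   # <p> averaged over the step
print(dx_dt, p_avg)                                 # agree to O(dt^2)
```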

Now try to explain this to an experimentally minded person. As long as the Born rule postulate works, they couldn't care less whether or not it contradicts the rest of the postulates; to them it is none of their business.
 
  • #310
stevendaryl said:
Then why does the rule single out measurements?
There is no special interaction associated with measurements in classical theory. Look, try to formulate the Born rule without mentioning "measurement" or a macroscopic/microscopic distinction. It is not possible. In contrast, the rules for classical mechanics can be formulated without mentioning measurement. That doesn't mean that there are no measurements in classical mechanics, but that measurements are a derived concept, not a primitive concept.

We could try to make measurements a derived concept in QM, as well.

What is a measurement? Well, a first cut at this is that a measurement is an interaction between one system (the system to be measured) and a second system (the measuring device) so that the quantity to be measured causes a macroscopic change in the measuring device. Roughly speaking, we have:
  • A system being measured that has some operator ##O## with eigenvalues ##o_i##.
  • A measuring device, which has macroscopic states "##S_{\text{ready}}##" (for the state in which it has not yet measured anything) and "##S_i##" (for the state of having measured value ##o_i##); each macroscopic state will correspond to a huge number of microscopic states.
  • The device is metastable when in the "ready" state, meaning that a small perturbation away from the ready state will lead to an irreversible, entropy-increasing transition to one of the states ##S_i##.
  • The interaction between system and measuring device is such that if the system is in a state having eigenvalue ##o_i##, then the device is overwhelmingly more likely to end up in the state ##S_i## than in any other macroscopic state ##S_j##. (There might be other macroscopic states representing a failed or ambiguous measurement, but I'll ignore those for simplicity.)
Note: There is a bit of circularity here, in that to really make sense of the Born rule, I have to define what a measuring device is, and to define a measuring device, I have to refer to probability (which transitions are much more likely than others), which in quantum mechanics has to involve the Born rule. What we can do, though, is treat the fact that the device makes a transition to state ##S_i## when it interacts with a system in a pure state with eigenvalue ##o_i## as initially a matter of empirical observation, or as derivable from classical or semiclassical physics.

Then the Born rule implies that if the system to be measured is initially in a superposition of states of the form ##\sum_i c_i |\psi_i\rangle##, where ##|\psi_i\rangle## is an eigenstate of ##O## with eigenvalue ##o_i##, then the measuring device will, upon interacting with the system, make a transition to one of the macroscopic states ##S_i## with a probability given by ##|c_i|^2##.

Now, at this point, I think we can see where the talk about "measurement" in quantum mechanics is something of a red herring. Presumably, the fact that a pure eigenstate ##|\psi_i\rangle## of the system triggers the measuring device to go into macroscopic state ##S_i## is in principle derivable from applying Schrödinger's equation to the composite system, if we know the interaction Hamiltonian. But because of the linearity of quantum evolution, one would expect that if the initial state of the system were a superposition ##\sum_i c_i |\psi_i\rangle##, then the final state of the measuring device would be a superposition of different macroscopic states as well. (Okay, decoherence will actually prevent the device from being in a superposition, but the combination system + device + environment would be in a superposition.) So the Born rule for measurements can be re-expressed in what I think is an equivalent form that doesn't involve measurements at all:

If a composite system is in a state of the form ##|\Psi\rangle = \sum_i c_i |\Psi_i\rangle##, where for ##i \neq j##, ##|\Psi_i\rangle## and ##|\Psi_j\rangle## represent macroscopically distinguishable states, then that means that the system is in exactly one of the states ##|\Psi_i\rangle##, with probability ##|c_i|^2##.

Note the difference with the usual statement of the Born rule: I'm not saying that the system will be measured to have some eigenvalue ##\lambda_i## with probability ##|c_i|^2##, because that would lead to an infinite regress. You would need a microscopic system, a measuring device to measure the state of the microscopic system, a second measuring device to measure the state of the first measuring device, etc. The infinite regress must stop at some point where something simply HAS some value, not "is measured to have some value".

But with this reformulation of Born's rule, which I'm pretty sure is equivalent to the original, you can see that the macroscopic/microscopic distinction has to be there. You can't apply the rule without the restriction that ##i \neq j## implies ##|\Psi_i\rangle## is macroscopically distinguishable from ##|\Psi_j\rangle##. To see this, take the simple case of a single spin-1/2 particle, where we only consider spin degrees of freedom. A general state can be written as ##|\psi\rangle = \alpha |u\rangle + \beta |d\rangle##. Does that mean that the particle is "really" in the state ##|u\rangle## or the state ##|d\rangle##, and we just don't know which? No, it doesn't mean that. The state ##|\psi\rangle## is a different state from either ##|u\rangle## or ##|d\rangle##, and it has observably different behavior.

If you eliminate "measurement" as a primitive, then you can't apply the Born rule without making a macroscopic/microscopic distinction.
This is again an example of philosophical misunderstandings. The Born rule doesn't single out interactions between a measurement device and the system as opposed to any other interaction. The same fundamental interactions of the Standard Model are at work always. Of course quantum theory, like classical theory, is about what we are able to observe and measure. The probabilities described by quantum theory are thus probabilities for the outcomes of measurements of a given observable on a system whose state is given by previous observations or a preparation procedure.

What you describe further is the collapse hypothesis, which I think is only a very special case that almost never applies, and when it does, it's because the measurement device was carefully constructed to enable a (good approximation of a) von Neumann filter measurement. I thus don't say that the system undergoes a transition to the state ##|\psi_i \rangle \langle \psi_i|## with probability ##|c_i|^2##, but simply that I measure ##o_i## with this probability. It may well be that the system is destroyed by the measurement process (e.g., a photon is absorbed when registered by a photodetector or an em. calorimeter).

That the measurement device must have the classical properties you describe is also pretty clear, since we have to "amplify" the microscopic properties to be able to measure them, but I don't think that there is a distinction between classical and quantum laws on a fundamental level. The classical behavior is derivable from quantum theory by appropriate averaging procedures in the usual sense of quantum statistics. A "macro state" is thus describable as the average over a large number of "micro states". You mention entropy production yourself, but that indeed makes it necessary to neglect information, i.e., to coarse grain to the relevant macroscopic observables.
 
  • #311
atyy said:
Anyway, the basic idea is that unless there is fine tuning, it is unlikely the universe was created in equilibrium.
I think this idea misses the point of statistical equilibrium. A system in statistical equilibrium tends to stay close to it, because the majority of all possible states are close to equilibrium. Statistical equilibrium is nothing but the state of largest entropy. Therefore one does not need fine tuning to have equilibrium. Just the opposite: one needs fine tuning to be in a state far from equilibrium.
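A quick counting sketch (my own illustration, with a toy system of ##N## fair coin tosses standing in for the microstates) shows how overwhelmingly the states concentrate near the maximum-entropy macrostate:

```python
from math import comb

# Sketch: for N fair coin tosses, the fraction of the 2^N microstates whose
# heads fraction lies within 5% of the equilibrium value 1/2.
N = 1000
eps = 0.05
lo, hi = int(N * (0.5 - eps)), int(N * (0.5 + eps))
near_equilibrium = sum(comb(N, k) for k in range(lo, hi + 1))
print(near_equilibrium / 2**N)   # ~0.998: almost every microstate is near equilibrium
```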
 
  • Like
Likes vanhees71
  • #312
rubi said:
BM is possibly one of the least rational explanations that people have come up with in the history of science.
So what's the most rational interpretation of QM in your opinion? Consistent histories? With non-classical logic (see Griffiths)? Changing the rules of logic is the least rational thing to do for my taste.
 
  • Like
Likes vanhees71
  • #313
Demystifier said:
I think this idea misses the point of statistical equilibrium. A system in statistical equilibrium tends to stay close to it, because the majority of all possible states are close to equilibrium. Statistical equilibrium is nothing but the state of largest entropy. Therefore one does not need fine tuning to have equilibrium. Just the opposite: one needs fine tuning to be in a state far from equilibrium.

But this assumes a discrete state space. If the state space is not discrete, then there is no unique notion of majority.

Also, it makes no sense to use "majority" as an argument. It is dynamics that is fundamental, not statistical mechanics.
 
  • #314
atyy said:
But this assumes a discrete state space. If the state space is not discrete, then there is no unique notion of majority.
Sure, you have to fix some measure. But in many cases there is a natural choice of measure. For instance, in classical statistical physics for one particle in 3 dimensions the natural measure is ##d^3x d^3p##, which is related to the fact that the phase volume is conserved owing to the Liouville theorem. A similar measure exists for Bohmian mechanics.
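For what it's worth, here is a small sketch (my own, for a 1-D harmonic oscillator with ##m=\omega=1##, whose Hamiltonian flow is a rotation in phase space) illustrating the phase-volume conservation behind that choice of measure:

```python
import numpy as np

# Sketch: Liouville's theorem for a 1-D harmonic oscillator (m = omega = 1).
# The flow is (x, p) -> (x cos t + p sin t, p cos t - x sin t); transport the
# vertices of a small phase-space polygon and check its area is unchanged.

def area(poly):
    """Shoelace formula for a polygon given as an (n, 2) array of vertices."""
    x, p = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(p, -1)) - np.dot(p, np.roll(x, -1)))

square = np.array([[1.0, 0.0], [1.2, 0.0], [1.2, 0.2], [1.0, 0.2]])
t = 0.7
R = np.array([[np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])
evolved = square @ R.T   # apply the flow to each vertex

print(area(square), area(evolved))   # both 0.04: phase volume is conserved
```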
 
  • #315
Demystifier said:
Sure, you have to fix some measure. But in many cases there is a natural choice of measure. For instance, in classical statistical physics for one particle in 3 dimensions the natural measure is ##d^3x d^3p##, which is related to the fact that the phase volume is conserved owing to the Liouville theorem. A similar measure exists for Bohmian mechanics.

But the world is manifestly not in thermodynamic equilibrium.
 
  • #316
atyy said:
But the world is manifestly not in thermodynamic equilibrium.
And that's one of the greatest mysteries in statistical physics. The natural state is thermodynamic equilibrium, and nobody knows exactly why we are not in equilibrium.
 
  • #317
Demystifier said:
And that's one of the greatest mysteries in statistical physics. The natural state is thermodynamic equilibrium, and nobody knows exactly why we are not in equilibrium.

Well, at least you are consistent. I've always thought the mystery was why silly things like the canonical ensemble actually work :)
 
  • #318
vanhees71 said:
What you describe further is the collapse hypothesis,

Tell me how my story changes if you don't assume collapse. I think that without collapse, measurements become only subjective, which is basically the Many Worlds idea.

I would distinguish three aspects of the collapse hypothesis:
  1. After measurement, there is no longer observable interference between alternatives.
  2. Measurement reveals a fact about the world (that a certain observable has a certain value).
  3. After measurement, the composite system (system measured plus measuring device plus environment) is in a state consistent with the value measured, and so from then on, evolves from that state, not the original superposition.
Number 1 is not a necessary assumption, because it is presumably derivable from ordinary quantum mechanics, if you take into account decoherence.

So in denying collapse, are you denying 2? Measurements don't actually reveal anything about the world? I can't believe that. How could a result count as a measurement if it doesn't tell us anything about the world? But if it does tell us something about the world, what is it telling us about the world?

As for #3, my discussion didn't mention anything about evolution after the measurement, so that's not relevant.

This is again an example of philosophical misunderstandings.

On the contrary, I think it shows that the minimal interpretation is incoherent as it stands. What am I misunderstanding?
 
  • Like
Likes zonde
  • #319
vanhees71 said:
This is again an example of philosophical misunderstandings. The Born rule doesn't single out interactions between a measurement device and the system as opposed to any other interaction. The same fundamental interactions of the Standard Model are at work always. Of course quantum theory, like classical theory, is about what we are able to observe and measure.
Classical theory is about what is really there. With measurements we check that the theory faithfully describes what is there. So measurements are special in classical theory, i.e., measurements can falsify the theory.
In QT the Born rule eliminates solipsism from the theory and puts it on ground where it can be checked against reality. If you discard the Born rule, you can't perform a scientific test of QT.
And this is how I understand this phrase from stevendaryl's post:
stevendaryl said:
The infinite regress must stop at some point where something simply HAS some value, not "is measured to have some value".
 
  • #320
vanhees71 said:
You mention entropy production yourself, but that indeed makes it necessary to neglect information, i.e., to coarse grain to the relevant macroscopic observables.

Coarse graining does not explain how measurement results in only one outcome from a superposition of several possible outcomes. That's a misunderstanding on your part.
 
  • #321
vanhees71 said:
What you describe further is the collapse hypothesis, which I think is only a very special case that almost never applies, and when it does, it's because the measurement device was carefully constructed to enable a (good approximation of a) von Neumann filter measurement. I thus don't say that the system undergoes a transition to the state ##|\psi_i \rangle \langle \psi_i|## with probability ##|c_i|^2##, but simply that I measure ##o_i## with this probability. It may well be that the system is destroyed by the measurement process (e.g., a photon is absorbed when registered by a photodetector or an em. calorimeter).

I went through that already. Yes, that's what you say, and I claim that it is nonsense. That's the infinite regress that I'm talking about. For the microscopic system, it's not that it is ACTUALLY in a state with eigenvalue ##o_i##, it is only that the measuring device MEASURES it to have eigenvalue ##o_i##. But the meaning of "the measuring device measures ##o_i##" is that it makes a transition to some corresponding macroscopic state ##S_i##. Now, you want to say that the device isn't ACTUALLY in state ##S_i##, it's just that I OBSERVE it to be in that state. But since I'm a quantum mechanical system as well, presumably I'm not ACTUALLY in the state of "observing the device to be in state ##S_i##"; it's that a third person observes me to be observing the device to be in state ##S_i##. Etc.

If measurements are ordinary interactions, then saying that "X observes Y" is a statement about the state of X. X is in the particular state of observing measurement outcome Y. So there is no need to bring in measurements again--it's simply a fact about the system X.
 
  • #322
Demystifier said:
So what's the most rational interpretation of QM in your opinion? Consistent histories? With non-classical logic (see Griffiths)? Changing the rules of logic is the least rational thing to do for my taste.
Consistent histories doesn't change the rules of logic. It specifies rules for how to obtain a single framework. All interpretations need to do this, or how do you calculate the probability for ##S_x=+1\wedge S_y=-1## in BM? Quantum theory is contextual, and no interpretation can deny this fact without changing the predictions of the theory. CH is not really an interpretation of QT. It rather accomplishes the necessary job of telling us how to obtain probabilities that add up to 1. The interpretational part of CH is to view time evolution as a stochastic process. What you call "changing the rules of logic" is common to all interpretations of QT, even BM. It's just that the CH people were the first to understand how to deal with it mathematically.

I don't know what the most rational explanation is, but if an interpretation requires a conspiracy of cosmic extent, then practically everything is more rational. The analogy between BM and epicycles is really quite obvious.

stevendaryl said:
Coarse graining does not explain how measurement results in only one outcome from a superposition of several possible outcomes. That's a misunderstanding on your part.
I don't understand why you think that a stochastic theory needs to explain how one outcome is selected. If I have a theory about a classical coin tossing experiment, which specifies the probabilities ##p_{h/t}=\frac{1}{2}##, then "nature is genuinely random and will randomly select one of the possibilities" is a perfectly fine explanation. If that's true, then no further explanation can be possible.
 
  • #324
Demystifier said:
Are you saying that Griffiths
https://arxiv.org/abs/1110.0974
is wrong?
No. The deviation from classical reasoning for quantum phenomena is dictated by experiments. CH explains how to obtain single frameworks, within which one can resort to classical reasoning and use classical probability theory. No interpretation gets around this. "##S_x=+1\wedge S_y=-1##" is a completely valid conjunction of propositions according to classical logic, so if you claim that it is a valid statement in BM, you should be able to tell me what probability BM assigns to it.
 
  • #325
rubi said:
The analogy between BM and epicycles is really quite obvious.

It's a poor analogy. Epicycles have been shown to be right. BM remains conjectural.
 
  • Like
Likes rubi
  • #326
atyy said:
It's a poor analogy. Epicycles have been shown to be right. BM remains conjectural.
Well, there are two possibilities:
1. BM makes the same predictions as QM, which is what the Bohmians usually claim. In that case, the analogy is spot on.
2. BM makes different predictions than QM. In that case it either contradicts experiments or the different predictions concern only situations that have not been experimentally tested yet. Then most physicists would expect the QM predictions to be right and the BM predictions to be wrong. And if the QM predictions turned out to be wrong, people would be more likely to just adjust the QM model (e.g., modify the Hamiltonian) than to adopt BM (e.g., see neutrino oscillations).
 
  • #327
rubi said:
No. The deviation from classical reasoning for quantum phenomena is dictated by experiments. CH explains how to obtain single frameworks, within which one can resort to classical reasoning and use classical probability theory. No interpretation gets around this. "##S_x=+1\wedge S_y=-1##" is a completely valid conjunction of propositions according to classical logic, so if you claim that it is a valid statement in BM, you should be able to tell me what probability BM assigns to it.
"##S_x=+1\wedge S_y=-1## (at the same time)" is an invalid statement in BM. It is also an invalid statement in the standard Copenhagen interpretation. And more importantly, there is no experiment which gives ##S_x=+1\wedge S_y=-1## (at the same time). So how can this be dictated by experiments?
 
  • #328
rubi said:
I don't understand why you think that a stochastic theory needs to explain how one outcome is selected. If I have a theory about a classical coin tossing experiment, which specifies the probabilities ##p_{h/t}=\frac{1}{2}##, then "nature is genuinely random and will randomly select one of the possibilities" is a perfectly fine explanation. If that's true, then no further explanation can be possible.

I think you're arguing something different than vanhees is. I am not arguing against the possibility of a stochastic description of physics.

The point about coarse-graining is that if you could actually do the computations to figure out how the composite wave function for system + measuring device + observer + environment (however far it goes) evolves, then you would find that:

If
  • when the system being measured is in a state corresponding to eigenvalue ##o_1##, the measuring device makes a transition to a macroscopic state ##S_1##, and
  • when the system being measured is in a state corresponding to eigenvalue ##o_2##, the measuring device makes a transition to a macroscopic state ##S_2##, then
  • when the system is in a superposition of those two states, the composite wave function makes a transition to a superposition of those macroscopic states (or a mixture, if you like, but I'm using superposition because I'm including the environment in the wave function)
That follows from the linearity of the evolution equations for quantum mechanics. Coarse graining is a mathematical tool for extracting a macroscopic state from a microscopic state. It isn't going to produce a single outcome if the underlying microscopic state reflects a superposition of macroscopically different outcomes.
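A short sketch of that point (my own illustration, with hypothetical orthogonal environment states ##|e_1\rangle, |e_2\rangle## standing in for perfect decoherence): tracing out the environment of the entangled post-measurement state yields a weighted mixture over the pointer states, not a single outcome.

```python
import numpy as np

# Sketch: coarse graining (here, a partial trace over the environment) turns
# the entangled state alpha|S1>|e1> + beta|S2>|e2> into a *mixture* over the
# pointer states S1, S2 -- both outcomes survive, with Born weights.
alpha, beta = np.sqrt(0.3 + 0j), np.sqrt(0.7 + 0j)

S1, S2 = np.eye(2, dtype=complex)   # pointer basis
e1, e2 = np.eye(2, dtype=complex)   # orthogonal environment states
Psi = alpha * np.kron(S1, e1) + beta * np.kron(S2, e2)

rho = np.outer(Psi, Psi.conj()).reshape(2, 2, 2, 2)   # indices: s, e, s', e'
rho_pointer = np.trace(rho, axis1=1, axis2=3)         # trace out environment

print(np.round(rho_pointer.real, 2))
# [[0.3 0. ]
#  [0.  0.7]] -> diag(|alpha|^2, |beta|^2): a mixture, not a single outcome
```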

You can certainly have an additional, stochastic step in which one coarse-grained macroscopic state is selected from the superposition or mixture, but that is an additional step.
 
  • Like
Likes PeterDonis
  • #329
stevendaryl said:
Tell me how my story changes if you don't assume collapse. I think that without collapse, measurements become only subjective, which is basically the Many Worlds idea.

I would distinguish three aspects of the collapse hypothesis:
  1. After measurement, there is no longer observable interference between alternatives.
  2. Measurement reveals a fact about the world (that a certain observable has a certain value).
  3. After measurement, the composite system (system measured plus measuring device plus environment) is in a state consistent with the value measured, and so from then on, evolves from that state, not the original superposition.
Number 1 is not a necessary assumption, because it is presumably derivable from ordinary quantum mechanics, if you take into account decoherence.

So in denying collapse, are you denying 2? Measurements don't actually reveal anything about the world? I can't believe that. How could a result count as a measurement if it doesn't tell us anything about the world? But if it does tell us something about the world, what is it telling us about the world?

As for #3, my discussion didn't mention anything about evolution after the measurement, so that's not relevant.
On the contrary, I think it shows that the minimal interpretation is incoherent as it stands. What am I misunderstanding?

After a measurement you usually have a "pointer reading", i.e., a macroscopic observable, but you are far from knowing the precise quantum state of the composite system. There's no collapse in the sense of some Copenhagen flavors, because that would violate the locality and causality implemented in relativistic QFT.
 
  • #330
vanhees71 said:
After a measurement you usually have a "pointer reading", i.e., a macroscopic observable, but you are far from knowing the precise quantum state of the composite system.

That's what the macroscopic states ##S_i## are: pointer readings. So are you saying that after the measurement, the device is in one of the states ##S_i## with probability ##|c_i|^2##? How is that not a collapse?
 
