Can the Born Rule Be Derived in the Many Worlds Interpretation?

Summary
Sean Carroll's paper explores the derivation of the Born Rule within the Many Worlds Interpretation (MWI) of quantum mechanics, addressing the challenge of applying probabilities to a deterministic universe. The discussion highlights the importance of Gleason's Theorem, which suggests that the Born Rule is the only consistent probability rule dependent solely on the wave function. Some participants express skepticism about the derivations, arguing they rely on axioms that may be circular, particularly concerning the tensor product structure of Hilbert spaces. The conversation also touches on the conceptual difficulties in understanding quantum mechanics, especially regarding the nature of measurement and probability. Overall, the discourse emphasizes the ongoing debate about the foundations of quantum mechanics and the implications of these interpretations.
  • #31
Lubos wrote up an analysis of Carroll's paper that I'd say should factor into this discussion.

For example:
An example of Carroll-Sebens circular reasoning is that they assume that small off-diagonal entries of a density matrix may be neglected – they assume it before they derive or admit that the small entries correspond to probabilities. That's, of course, illegitimate. If you want to replace a small quantity by zero, and to be able to see whether the replacement is really justified, you have to know what the quantity actually is. Moreover, these things are only negligible if classical physics becomes OK, so whatever you do with this approximation is clearly saying nothing whatsoever about the intrinsic, truly quantum, properties of quantum mechanics in the quantum regime!

...

The other obvious problem with the ESP quote above is that it says what the "credence" is independent of. But a usable theory should actually say what it does depend upon. Ideally, one should have a formula. If one has a formula, one immediately sees what it depends upon and what it doesn't depend upon. A person who actually has a theory would never try to make these unnecessarily weak statements that something does not depend on something else. Isn't it far more sensible and satisfactory to say what the quantity does depend upon – and what it's really equal to? Quantum mechanics answers all these questions very explicitly, Carroll and Sebens don't.
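To make Lubos's first complaint (about the off-diagonal entries) concrete, here is a minimal numpy sketch of my own; the qubit, the value of eps, and the basis choice are all just illustrative assumptions, nothing taken from the Carroll-Sebens paper. The point it shows: deciding that the off-diagonal terms are "negligible" is itself a statement about Born-rule probabilities.

```python
import numpy as np

# A qubit density matrix with small off-diagonal ("coherence") terms.
eps = 1e-3
rho = np.array([[0.7, eps],
                [eps, 0.3]], dtype=complex)

# Reading the diagonal as Born-rule probabilities for the |0>, |1> outcomes:
p0, p1 = rho[0, 0].real, rho[1, 1].real
print(p0, p1)                      # 0.7 0.3

# Dropping the off-diagonals leaves those two numbers unchanged...
rho_diag = np.diag(np.diag(rho))
print(np.diag(rho_diag).real)      # [0.7 0.3]

# ...but calling eps "small" is a claim about interference probabilities,
# e.g. for a measurement in the |+>, |-> basis:
plus = np.array([1, 1]) / np.sqrt(2)
p_plus_full = (plus.conj() @ rho @ plus).real        # 0.5 + eps
p_plus_trunc = (plus.conj() @ rho_diag @ plus).real  # 0.5
print(p_plus_full - p_plus_trunc)  # ~1e-3: the "negligible" difference is
# itself a Born-rule probability difference, which is the alleged circularity.
```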
 
  • #32
atyy said:
But doesn't the definition of A and B being independent hold regardless of the tensor product structure and the Born rule?

Fredrik said:
Independence is the result P(a & b)=P(a)P(b). This result follows from the tensor product stuff. Perhaps it also follows from something else. This is why this argument doesn't prove that we need to use tensor products, and that's why the Aerts & Daubechies argument is so appealing. It does prove that (given some assumptions that are rather difficult to understand), we have to use tensor products.

Judging from your quotes it seems you think there is a way to do probability without bilinear forms? But any probability P(a) is a bilinear form, e.g.:

https://www.physicsforums.com/attachment.php?attachmentid=71661&stc=1&d=1406376963
(Parthasarathy, Quantum Stochastic Calculus, p. 1)

and because of this bilinear form-ness, P(a) = g(a,a) = <a,ψ(a)> immediately implies an isomorphism to a tensor product

https://www.physicsforums.com/attachment.php?attachmentid=71662&stc=1&d=1406377319
(Lang, Linear Algebra, 2nd edition, appendix)

so it looks to me like you're just unavoidably using tensor product structure no matter what you do, i.e. if you do something else or ignore the tensor product structure, it's there anyway (no matter what symbols you use), no?

I could be wrong, but it looks to me like you just can't use probabilities without using bilinear forms and tensor products. Once that is established, exhibiting something like
P(a)P(b) = g(a,a)h(b,b) = <a,ψ(a)><b,φ(b)> = ... = P(a&b)
is going to require more tensor products to give some meaning to whatever the "&" is supposed to mean (in the case of independence you exploit Pythagoras, but this special case lives in a structure built into bilinear forms, which are an unavoidable consequence of introducing the very notion of probability).
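To make the factorization explicit, here is a small numpy check of my own (random states, arbitrary dimensions, and basis vectors chosen purely for illustration): for a tensor-product joint state, the Born-rule joint probability automatically splits into P(a)P(b).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim):
    """A random normalized complex vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi = random_state(2)      # "Alice's" system
phi = random_state(3)      # "Bob's" system
joint = np.kron(psi, phi)  # tensor-product ("independent") joint state

a = np.zeros(2); a[0] = 1  # outcome "a": first basis vector on Alice's side
b = np.zeros(3); b[1] = 1  # outcome "b": second basis vector on Bob's side

P_a = abs(np.vdot(a, psi))**2
P_b = abs(np.vdot(b, phi))**2
P_ab = abs(np.vdot(np.kron(a, b), joint))**2  # Born rule on the joint state

print(np.isclose(P_ab, P_a * P_b))  # True: P(a & b) = P(a) P(b)
```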

If you admit that the notion of probability exists, Parthasarathy's little calculation (or its limit) makes it completely obvious that the Born rule will exist (if you've decided to base your mechanics on the notion of a state vector, which itself just falls out of the notion of probability...), and a similar point is made by Lubos:

Again, we need to know that mutually exclusive states are orthogonal and the probability has something to do with the length of a state vector (or its projection to a subspace).

That's everything we need to assume if we want to prove Born's rule.

...

That's the real reason why Born's rule works. The probabilities and mutual exclusiveness have to be expressed as a mathematical function or property of state vectors, and the totally general rules for probabilities (like the additive behavior of probabilities under "or") heavily constrain what the map between the "human language" (probability, mutual exclusiveness) and the "mathematical properties" can be. The solution to these constraints is basically unique. The probabilities have to be given by the second powers of the moduli of the complex probability amplitudes. It's because only such "quadratic" formulae for the probabilities obey the general additive rules, thanks to the Pythagorean theorem.
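For what it's worth, the "Pythagorean" step can be spelled out in a couple of lines (my own paraphrase of the standard textbook argument, not a substitute for a Gleason-type theorem). Expand the state over an orthonormal basis of mutually exclusive outcomes,
$$|\psi\rangle = \sum_i c_i |i\rangle, \qquad \langle i|j\rangle = \delta_{ij}, \qquad \sum_i |c_i|^2 = 1,$$
and let ##\hat{P} = |i\rangle\langle i| + |j\rangle\langle j|## be the projector onto "outcome ##i## or outcome ##j##". Then
$$\|\hat{P}|\psi\rangle\|^2 = |c_i|^2 + |c_j|^2,$$
so the assignment ##\mathrm{Prob}(i) = |c_i|^2## is additive under "or" for orthogonal (mutually exclusive) outcomes, and the analogous assignment with any other power of ##|c_i|## would not be.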
 
  • #33
bolbteppa said:
Lubos wrote up an analysis of Carroll's paper that I'd say should factor into this discussion.
For example:
An example of Carroll-Sebens circular reasoning is that they assume that small off-diagonal entries of a density matrix may be neglected – they assume it before they derive or admit that the small entries correspond to probabilities. That's, of course, illegitimate. If you want to replace a small quantity by zero, and to be able to see whether the replacement is really justified, you have to know what the quantity actually is. Moreover, these things are only negligible if classical physics becomes OK, so whatever you do with this approximation is clearly saying nothing whatsoever about the intrinsic, truly quantum, properties of quantum mechanics in the quantum regime!

That's the complaint I've always had about decoherence. Basically, decoherence can be used to argue that we can effectively treat macroscopic superpositions as proper mixtures because the interference terms are negligible. But "negligible" is relative to an interpretation of amplitudes as probabilities (when squared).

I wouldn't call such arguments completely circular--maybe they're "semi-circular". What they show is that the interpretation of amplitudes via the Born rule is robustly self-consistent. There are lots of different ways to think about making sense of probabilities in QM, and they all boil down to something that is consistent with the Born rule. More than that, there doesn't seem to be any alternative to the Born rule that is at all plausible. But logically speaking, I don't think the Born rule can be proved starting with a theory that doesn't have it.
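To put numbers on the "interference terms are negligible" step, here's a toy sketch of my own: a qubit coupled to a two-dimensional stand-in for the environment, with the overlap parameter chosen arbitrarily for illustration.

```python
import numpy as np

# System starts in a superposition a|0> + b|1>.
a, b = 1/np.sqrt(2), 1/np.sqrt(2)

# Environment "pointer" states |E0>, |E1> with a tunable overlap.
overlap = 0.01                      # nearly orthogonal -> strong decoherence
E0 = np.array([1.0, 0.0])
E1 = np.array([overlap, np.sqrt(1 - overlap**2)])

# Joint state after the interaction: a|0>|E0> + b|1>|E1>.
joint = a * np.kron([1, 0], E0) + b * np.kron([0, 1], E1)

# Reduced density matrix of the system: trace out the environment.
rho_full = np.outer(joint, joint.conj()).reshape(2, 2, 2, 2)
rho_sys = np.einsum('ikjk->ij', rho_full)

print(rho_sys.round(4))
# Diagonal stays |a|^2, |b|^2; the off-diagonal term is a*b*<E1|E0> ~ 0.005.
# Calling 0.005 "negligible" is a statement about interference probabilities,
# i.e. it already leans on reading squared amplitudes as probabilities.
```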
 
  • #34
bhobba said:
Plausibility isn't something objective - it's not something that's really provable - it's either plausible to you or it isn't. Given the situation in MW I find the idea of using decision theory to figure out some kind of utility to help in deciding the likelihood of experiencing a certain world quite reasonable. It's actually similar to the Bayesian view of probabilities.

The quote from Wallace in post #21 suggests that the remaining issue is about something objective, and not plausibility.
 
  • #35
stevendaryl said:
Look at a question about an ordinary electron: Between measurements, does it have spin-up in the z-direction, or spin-down? Either it has a definite answer, or it doesn't. Bell's inequality seems to imply that in certain circumstances, it just doesn't have a definite answer (between measurements, anyway).

Now you ask the same question about a measurement result. Did Alice measure spin-up, or spin-down? Decoherence basically shows us that we can pretend, for all intents and purposes, that it has a definite answer. But if there is nothing special about measurement, then measurement can't be any more definite than an electron's spin. Not at a fundamental level.
I have no objections against this.

stevendaryl said:
You have to add something to QM (something special about measurement that collapses the wave function) to AVOID many-worlds.
This is where we disagree. It's clear what the source of the disagreement is. What's QM to you is "QM plus an unnecessary assumption" to me. I agree that your version of QM is a many-worlds theory.

However, I wouldn't say that we can avoid many worlds by adding a collapse axiom. I think that your version of QM plus a collapse axiom would just be inconsistent, but we can perhaps find a similar theory that makes similar predictions and contains a collapse mechanism. I would consider this an alternative to QM, not an interpretation of QM.

stevendaryl said:
The same approach doesn't make sense in QM, because results such as Bell's theorem imply that we CAN'T assume that the objects under study have definite but unknown properties. (Not without nonlocal interactions, anyway).
I think you're giving Bell's theorem too much credit here. The familiar spin-1/2 inequality tells us that a theory (different from QM) in which the state fully determines the outcome of each measurement with a Stern-Gerlach device can't make the same predictions as QM. Since experiments agree with QM, this means that all such theories have been falsified.

This is a cool result, but to argue that it forces us to identify systems with pure states, you have to jump to the conclusion that it also rules out the possibility that QM works because systems are doing things that QM doesn't describe in detail. I really don't think Bell's theorem is strong enough to do anything like that. It rules out a class of ontological models for QM, but it doesn't say a whole lot about whether an even better theory could exist.
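For concreteness, here is the standard CHSH comparison in a few lines of numpy (my own illustration with the usual textbook analyzer angles, nothing specific to this thread): the singlet correlation ##E(a,b) = -\cos(a-b)## reaches ##2\sqrt{2}##, while any assignment of definite outcomes to all settings is bounded by 2. That bounded class of models is what gets ruled out, nothing more.

```python
import numpy as np
from itertools import product

# Quantum prediction for the singlet: E(a, b) = -cos(a - b).
a0, a1 = 0.0, np.pi / 2            # Alice's two analyzer angles
b0, b1 = np.pi / 4, 3 * np.pi / 4  # Bob's two analyzer angles
E = lambda a, b: -np.cos(a - b)
S_qm = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S_qm))                   # ~2.828 = 2*sqrt(2)

# Local deterministic models: each hidden state assigns +/-1 to every setting.
best = 0.0
for A0, A1, B0, B1 in product([+1, -1], repeat=4):
    S = A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1
    best = max(best, abs(S))
print(best)                        # 2: the CHSH/Bell bound
```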
 
  • #36
I am interested in understanding more about whether there is anything special about a measurement. Does a measurement always involve an increase in entropy? Is that what makes it special?
 
  • #37
Jilang said:
I am interested in understanding more about whether there is anything special about a measurement. Does a measurement always involve an increase in entropy? Is that what makes it special?

In Copenhagen, a measurement is axiomatically special, even before entropy is defined. It is a fundamental concept, and it is assumed that we intuitively know what it means in terms of our everyday language about our actions in the world we sense.

In Many-Worlds, if I understand correctly, a measurement is not a fundamental concept, and is argued to emerge when the observer and system are decohered by the environment.
 
  • #38
Jilang said:
I am interested in understanding more about whether there is anything special about a measurement. Does a measurement always involve an increase in entropy? Is that what makes it special?
What's "special" about a measurement is just that some kind record of the outcome is created, and isn't immediately destroyed. If you measure something, then the final state of the measuring device is such a record, and so is the memory of the result that's created in your brain.
 
  • #39
Thanks. Would it be fair to say that the record made was non-reversible then?
 
  • #40
stevendaryl said:
The difference with QM is that statements such as "The electron has spin-up in direction x" not only have an unknown truth value, but they don't have a truth value. They can't have a truth value--that would be a hidden variable, which Bell's theorem rules out (again, if we disallow nonlocal interactions). I don't see how probabilities in the QM can reflect ignorance of a truth value if the truth value doesn't exist. If you ask "What color is the real number pi?" there is no answer--pi doesn't have a color. It doesn't make sense to say that it has a 20% probability of being red.

Quantum theory tells you precisely the alternatives concerning the statement "The electron has spin up in direction x": either the electron is prepared somehow in the eigenstate ##|\sigma_x=+1/2\rangle## of the operator representing the spin-x component, or it is prepared in the eigenstate ##|\sigma_x=-1/2\rangle##, or it is in neither of these eigenstates and is then in some other spin state (described, in general, by a statistical operator ##\hat{\rho}##). In all cases the probability for the outcome of a measurement of the spin-x component to be +1/2 ("up") is predicted by quantum theory, and you cannot know more about the value of the spin-x component than the probability of its occurrence when measured. These probabilistic statements about the outcome of measurements, given the state of the system, represented by a statistical operator, can be checked experimentally by preparing ensembles of systems in this state and then measuring the spin-x component.
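As a trivial numerical illustration of this (my own two-liner, with the state chosen as an example): prepare the ##\sigma_z## "up" eigenstate, in which ##\sigma_x## is undetermined, and compute the Born probabilities for the two ##\sigma_x## outcomes.

```python
import numpy as np

# Spin-1/2: sigma_x eigenstates |x,+> and |x,-> written in the sigma_z basis.
x_plus  = np.array([1,  1]) / np.sqrt(2)
x_minus = np.array([1, -1]) / np.sqrt(2)

# A state prepared as the sigma_z "up" eigenstate (sigma_x is undetermined).
psi = np.array([1, 0])

# Born rule: probability of each sigma_x outcome.
p_up   = abs(np.vdot(x_plus,  psi))**2   # 0.5
p_down = abs(np.vdot(x_minus, psi))**2   # 0.5
print(p_up, p_down)   # all QM predicts is this probability distribution
```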

stevendaryl said:
That's my complaint about probabilities in QM. It doesn't make sense to say that probabilities reflect ignorance about system properties if our theory tells us that the system just doesn't HAVE those properties.

I know that after a measurement, it seems to be the case that "The electron was measured to have spin-up in direction x" is either true or false. So it seems that the statement has a definite truth value, afterward, so we can apply probabilities in the same way we do classically, to reflect our ignorance about the truth value of a statement that has (or will have) a definite truth value. But that's where the issue of whether there is something special about measurement comes in. If "The electron has spin-up in the x-direction" has no truth value before the measurement, and there is nothing special about measurements, then why should "The electron was measured to have spin-up in the x-direction" have a definite truth value?

The only difference between classical physics and quantum physics is that in classical physics all observables always have definite values, be they known or unknown to a physicist, while in quantum theory any observable can have a definite value or not, depending on how the system was prepared. It's of course a bit unintuitive in the beginning of learning quantum theory that observables can be undetermined, but that's how Nature is.
 
  • #41
vanhees71 said:
The only difference between classical physics and quantum physics is that in classical physics all observables always have definite values, be they known or unknown to a physicist, while in quantum theory any observable can have a definite value or not, depending on how the system was prepared. It's of course a bit unintuitive in the beginning of learning quantum theory that observables can be undetermined, but that's how Nature is.

No. For example in Copenhagen and the Ensemble interpretation, one has to divide the universe into a classical part and a quantum part. There is no such division in classical physics, and one can imagine the laws of physics applying to the whole universe. However, Copenhagen and Ensemble interpretations do not assign any meaning to "the wave function of the universe". (Ballentine, as far as I can tell, fails to mention this classical/quantum division, which is foundational for the Ensemble interpretation. It's unclear whether he even knows it, which is why his discussion on measurement and wave function collapse is so misleading. Landau and Lifshitz, and Weinberg make this point clear.)
 
  • #42
Jilang said:
Thanks. Would it be fair to say that the record made was non-reversible then?
In principle, the time evolution of an isolated system is always reversible, but I think that the state of the environment (in which the record is stored) can't (even in principle) be reversed without also reversing the state of the measured object.
 
  • #43
Jilang said:
I am interested in understanding more about whether there is anything special about a measurement. Does a measurement always involve an increase in entropy? Is that what makes it special?

I have been looking around...Stevendaryl, I think you asked a similar question here.
https://www.physicsforums.com/showthread.php?t=668090
I don't think your question was answered satisfactorily, since any entropy decrease due to the information gain is outweighed many times over by the increase in the entropy of the physical system.
 
  • #44
Fredrik said:
In principle, the time evolution of an isolated system is always reversible, but I think that the state of the environment (in which the record is stored) can't (even in principle) be reversed without also reversing the state of the measured object.

I am wondering how you could remove the record on a photographic film, for example. Is that what you are saying?
 
  • #45
atyy said:
in Copenhagen and the Ensemble interpretation, one has to divide the universe into a classical part and a quantum part.
[...]
(Ballentine, as far as I can tell, fails to mention this classical/quantum division, which is foundational for the Ensemble interpretation. It's unclear whether he even knows it, which is why his discussion on measurement and wave function collapse is so misleading. Landau and Lifshitz, and Weinberg make this point clear.)
I think that division is what's misleading. It suggests that measuring devices are fundamentally different, when in fact they're not.

The idea that measurements must have results doesn't belong to any particular interpretation. It's part of the definition of science. Scientific theories must be falsifiable. To be falsifiable they must assign probabilities to specific final states of interactions such that several humans can agree what final state has been obtained.

If you succeed at putting a two-detector measuring device in a superposition of "left detector clicked" and "right detector clicked", then you haven't produced a final state that a human is capable of interpreting as a result of a (position) measurement. One reason is that the interaction that stores the memory of the observed result in your brain is subject to decoherence. If you're the person making that observation, then from my point of view, your state almost instantly decoheres from "superposition of remembering left and remembering right" to the mixed state "either left or right". And you wouldn't be able to prepare the superposition anyway, because it would decohere to a mixed state almost instantly.
 
  • #46
Jilang said:
I am wondering how you could remove the record on a photographic film, for example. Is that what you are saying?
That sort of record is certainly impossible to reverse in practice. And it wouldn't do you any good, since records of the result of the measurement tend to be stored all over the place: on the plate, in your brain, and even in the air.

The claim that "everything is reversible" refers only to the fact that the time evolution operator ##e^{-iHt}## has the inverse ##e^{iHt}##. Unless we're talking about an interaction between two qubits or something like that, it's impossible in practice to carry out the reversal. And it's usually impossible even in principle to reverse the time evolution of only one of the subsystems (because it's more complicated than a unitary transformation).
 
  • #47
Fredrik said:
I think that division is what's misleading. It suggests that measuring devices are fundamentally different, when in fact they're not.

The idea that measurements must have results doesn't belong to any particular interpretation. It's part of the definition of science. Scientific theories must be falsifiable. To be falsifiable they must assign probabilities to specific final states of interactions such that several humans can agree that the final state has been obtained.

If you succeed at putting a two-detector measuring device in a superposition of "left detector clicked" and "right detector clicked", then you haven't produced a final state that a human is capable of interpreting as a result of a (position) measurement. One reason is that the interaction that stores the memory of the observed result in your brain is subject to decoherence. If you're the person making that observation, then from my point of view, your state almost instantly decoheres from "superposition of remembering left and remembering right" to the mixed state "either left or right".

But that assumes that "the wave function of the universe" (system + measurement device + environment) makes sense. In neither Copenhagen nor Ensemble interpretations does that make sense. Many-worlds is the attempt to be able to assign "the wave function of the universe" a meaning.
 
  • #48
atyy said:
But that assumes that "the wave function of the universe" (system + measurement device + environment) makes sense.
Why would we need to assign a wavefunction to the universe? I agree that we can't, but I don't see a reason to think of that as a problem. We're talking about a generalized probability theory. All it does is to assign probabilities to results of measurements, so we can't expect it to be useful in scenarios where measurements aren't possible in principle.
 
  • #49
Fredrik said:
Why would we need to assign a wavefunction to the universe? I agree that we can't, but I don't see a reason to think of that as a problem. We're talking about a generalized probability theory. All it does is to assign probabilities to results of measurements, so we can't expect it to be useful in scenarios where measurements aren't possible in principle.

Well, the idea is that in classical physics, we don't talk about measurements in formulating a theory. We write down, e.g., an action specifying the fields and their interactions. We assume that the whole universe can be described by the theory, and because the measurement device is in the universe it is also described by the theory.

However, in quantum theory, we seem to have a problem in extending the wave function to the whole universe. Because of this, in Copenhagen and Ensemble interpretations we do have to make this classical/quantum cut. It is true that in decoherence we can shift the classical/quantum cut so that the environment+measuring device+system are quantum, but then we still need a classical realm outside of that, unless the wave function of the universe makes sense.

It is of course fine to say QM is not a theory of the whole universe, and is only an effective theory. But that is equivalent to there being a fundamental problem with believing in QM as a final theory, i.e. QM has a limitation inherent in its structure, even though it is not falsified by experiments. In contrast, it is possible to conceive of classical theories in which there is no limitation due to the structure of the theory, and we necessarily rely on experiment to falsify such theories.
 
  • #50
Fredrik said:
The idea that measurements must have results doesn't belong to any particular interpretation. It's part of the definition of science. Scientific theories must be falsifiable. To be falsifiable they must assign probabilities to specific final states of interactions such that several humans can agree what final state has been obtained.

To me, that seems kind of weird reasoning, to say such and such must be true, because otherwise, we would have a hard time doing science. The world isn't required to accommodate our needs.
 
  • #51
Fredrik said:
Why would we need to assign a wavefunction to the universe? I agree that we can't, but I don't see a reason to think of that as a problem. We're talking about a generalized probability theory. All it does is to assign probabilities to results of measurements, so we can't expect it to be useful in scenarios where measurements aren't possible in principle.

That's what doesn't make sense to me. If there is nothing special about measurement, and it's just a complicated interaction between one system (the system of interest) and another (the recording device/observer), then giving probabilities for measurement results seems like it must amount to giving probabilities for certain configurations of physical systems. It seems to me that either measurement is special, or it's not. If measurement is not special, then why would the theory specifically describe probabilities for results of measurements, and not describe other sorts of interactions?
 
  • #52
atyy said:
Well, the idea is that in classical physics, we don't talk about measurements in formulating a theory. We write down eg. an action specifying the fields and their interactions. We assume that the whole universe can be described by the theory, and because the measurement device is in the universe it is also described by the theory.
Right, these theories describe fictional universes with properties in common with our own. (If you prefer, they are approximate descriptions of our universe). But when you think about what a theory must be like in order to be falsifiable, you see that it doesn't need to describe a universe. It just needs to assign probabilities to possible results of measurements.

atyy said:
However, in quantum theory, we seem to have a problem in extending the wave function to the whole universe.
Yes, but there's no need to, if you just stop thinking of QM as a theory of the first kind, and accept that it's a theory of the second kind. The problem that you're referring to, and all the other "problems" with QM, are consequences of the unnecessary assumption that I'm rejecting.

To clarify: The unnecessary assumption is the identification of pure states with possible configurations of the system. This assumption takes a theory of the second kind (an assignment of probabilities) and pretends that it's a theory of the first kind (a description). And the result is a disaster. Suddenly we have a "measurement problem", and a need to be able to associate a pure state with the universe. This leads inevitably to many worlds. (As I mentioned in a previous post, I don't think we can avoid many worlds by adding a collapse axiom. That just makes everything even worse).

atyy said:
Because of this, in Copenhagen and Ensemble interpretations we do have to make this classical/quantum cut.
All we have to do is to say which measuring device is supposed to tell us the result of the experiment. I wouldn't describe this as a classical/quantum cut.

atyy said:
It is true that in decoherence we can shift the classical quantum cut so that the environment+measuring device+system are quantum, but then we still need a classical realm outside of that, unless the wave function of the universe makes sense.
If what you mean by a classical realm is something like semi-stable records of the results of certain interactions, then yes. If you mean something truly classical, then no.
 
  • #53
stevendaryl said:
then why would the theory specifically describe probabilities for results of measurements, and not describe other sorts of interactions?

It describes probabilities that particular states will be reached after an interaction. Those states that are especially useful to us, we call "measurement results".
 
  • #54
stevendaryl said:
To me, that seems kind of weird reasoning, to say such and such must be true, because otherwise, we would have a hard time doing science. The world isn't required to accommodate our needs.
I didn't say that reality is a certain way because of science. I only said that a scientific theory needs to be falsifiable. It's the idea that a good theory has to be more than that that's wishful thinking. QM is a perfectly fine generalized probability theory, but people are still looking for ways to interpret it as a description of a universe, presumably because they really want QM to be a description of a universe.

stevendaryl said:
If measurement is not special, then why would the theory specifically describe probabilities for results of measurements, and not describe other sorts of interactions?
The theory has to assign probabilities to something that people can think of as "results", in order to be falsifiable. (A "theory" that doesn't do that isn't a theory). Some interactions produce "results", and some don't. OK, strictly speaking, none of them does, but some interactions produce states that are for practical purposes indistinguishable from classical superpositions. This is the sort of interaction I have in mind when I (somewhat sloppily) say that some interactions produce results. The ones that do can be considered measurements. So in that specific sense, measurements are "special", but they're not fundamentally different. We just put the "measurement" label on those interactions that are the most useful when we test the theory.

I can't answer the question of why our best theory would be one that assigns probabilities to measurements, and describes what's happening just after a state preparation and just after a measurement, but doesn't describe what's happening between state preparation and measurement. It's certainly counterintuitive, but so are the alternatives.
 
  • #55
Fredrik said:
Right, these theories describe fictional universes with properties in common with our own. (If you prefer, they are approximate descriptions of our universe). But when you think about what a theory must be like in order to be falsifiable, you see that it doesn't need to describe a universe. It just needs to assign probabilities to possible results of measurements.

Yes, that is enough for quantum mechanics to be a successful theory. That is what the Copenhagen interpretation says. The measurement problem then is:

(1) Is there a theory of the first type that can underlie quantum mechanics? Historically, von Neumann claimed to prove that this is impossible, i.e. that there is no "real state" of the system underlying the wave function. In fact, von Neumann's proof was in error, and Bohmian mechanics supplied a concrete example of a theory of the first type for non-relativistic quantum mechanics.

(2) In Copenhagen the wave function is just a calculating device for making predictions about subsystems of the universe, and does not represent the "real state" of the system. However, is it really the case that the wave function cannot represent the complete real state of the system? Many-worlds investigates this possibility.

Fredrik said:
Yes, but there's no need to, if you just stop thinking of QM as a theory of the first kind, and accept that it's a theory of the second kind. The problem that you're referring to, and all the other "problems" with QM, are consequences of the unnecessary assumption that I'm rejecting.

To clarify: The unnecessary assumption is the identification of pure states with possible configurations of the system. This assumption takes a theory of the second kind (an assignment of probabilities) and pretends that it's a theory of the first kind (a description). And the result is a disaster. Suddenly we have a "measurement problem", and a need to be able to associate a pure state with the universe. This leads inevitably to many worlds. (As I mentioned in a previous post, I don't think we can avoid many worlds by adding a collapse axiom. That just makes everything even worse).

As long as the wave function does not represent the real state of the system, then there is no problem with adding collapse. In fact, collapse or an equivalent axiom is necessary in quantum mechanics to describe the results of filtering experiments.

The measurement problem then is: Why can't the wave function represent the state of the system?

Is it because the system has no state (no quantum reality)? This would be an extremely surprising answer. If this answer is acceptable, there is no measurement problem (but there is a difficulty with this answer if one believes that there is a common-sense "classical" reality in the part of the universe not covered by the wave function). If the answer is not acceptable, then there is a measurement problem and the question is to supply examples of the state underlying the wave function.

Fredrik said:
All we have to do is to say which measuring device is supposed to tell us the result of the experiment. I wouldn't describe this as a classical/quantum cut.

Fredrik said:
If what you mean by a classical realm is something like semi-stable records of the results of certain interactions, then yes. If you mean something truly classical, then no.

Yes, I agree, but by convention these are called classical/quantum cuts. In a sense, it isn't consistent to object that the classical/quantum cut is misleading while also acknowledging that the wave function cannot extend to the whole universe. If the wave function cannot extend to the whole universe, and there is always a cut, why is it not reasonable to place the cut between the measuring device and the quantum system?

The measurement problem is the conflict between:
1) the idea that a cut somewhere seems necessary, since we cannot write a wave function of the universe
2) the idea that it is unreasonable to place the cut between the measuring device and quantum system, since the measuring device is also quantum
 
  • #56
vanhees71 said:
The only difference between classical physics and quantum physics is that in classical physics all observables always have definite values, be they known or unknown to a physicist, while in quantum theory any observable can have a definite value or not, depending on how the system was prepared. It's of course a bit unintuitive in the beginning of learning quantum theory that observables can be undetermined, but that's how Nature is.

It's much weirder than having a definite value because of the way it was prepared. In a spin-1/2 EPR experiment, Alice can measure the spin of the electron in the z-direction and find out that it is spin-up. At that point, the twin positron is as definitely spin-down in the z-direction as if it had been prepared that way.

That's a lot different than classical probabilities. I certainly agree that that's how Nature is, but what exactly it means is pretty mysterious, to me, at any rate.
 
  • #57
Jilang said:
Thanks. Would it be fair to say that the record made was non-reversible then?

That's exactly what makes something a measurement: irreversibility. Actually, there doesn't have to be a measurement made at all, in the strict sense. That's what decoherence says. If a quantum system undergoes an irreversible interaction with the environment, or any system, then after that point, for all intents and purposes, the wave function has collapsed.

But at a fundamental level, irreversibility is not a very satisfying condition for wave function collapse, because nothing is REALLY irreversible. It's just that as the number of particles involved grows, the conditions necessary to reverse the interaction rapidly become impossible to achieve in practice.

That's another thing that I find unsatisfying about the foundations of quantum mechanics. If the axioms involve measurement, then that means that they indirectly involve irreversibility. It seems a little weird to me that the fundamental axioms describing electrons (say) must involve macroscopic concepts such as irreversibility.
 
  • #58
Yes, it's very different. Quantum theory implies correlations (called entanglement) that cannot be described within a local deterministic theory (Bell's theorem).

On the other hand, given quantum theory there's nothing "mysterious" about the spin-1/2 EPR experiment you describe. The electron-positron pair was created in an entangled state like
$$|\Psi \rangle=\frac{1}{\sqrt{2}} \left( |{+1/2},{-1/2} \rangle - |{-1/2},{+1/2} \rangle \right).$$
Then the single electron at Alice's detector is described by a totally unpolarized state, i.e., by the statistical operator
$$\hat{\rho}_{\text{Alice}}=\frac{1}{2} \hat{1}.$$
The same holds true for Bob's positron.

Nevertheless the two-particle system is in an entangled state due to some preparation procedure (say the decay of a neutral particle like a ##\rho## meson). This indeed implies what you write, namely that as soon as Alice measures the electron's spin-z component to be +1/2, Bob's positron must have a spin-z component of -1/2. There is, however, nothing mysterious here. The correlation was due to the preparation of the electron-positron pair, not to Alice's measurement of the electron's spin-z component. There is no "spooky action at a distance", as Einstein put it.

This example also illustrates quite well why a collapse interpretation is at least problematic. (I think the assumption of a spontaneous collapse, which is outside the dynamics of quantum theory, is inconsistent with Einstein causality, as is a "cut" between a quantum and a classical world; everything is quantum, and the appearance of classical behavior is due to coarse-grained observation of macroscopic observables of objects on macroscopic scales, which is well understood from quantum many-body theory.)

If you assume that it's Alice's measurement which causes Bob's positron to spontaneously acquire a determined spin-z component, you indeed violate Einstein causality, because no signal can travel faster than the speed of light to make Bob's spin determined although initially it was completely undetermined. Within the minimal interpretation, there is no problem, because you take the Born interpretation of states really seriously, i.e., before Alice's measurement the spin-z component of both the electron and the positron were (even maximally) undetermined, but due to the preparation in an entangled state, the correlations are already implemented when the electron-positron pair were prepared. Of course, such a thing is not describable with local deterministic hidden-variable theories, and as long as nobody finds a consistent non-local deterministic theory which is as successful as QT, I stick to (minimally interpreted) QT :-).
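The two statements above, that Alice's electron on its own is described by the maximally mixed ##\hat{\rho}_{\text{Alice}}## while the pair is perfectly anticorrelated, are easy to check numerically; here is a small sketch of my own in the ##\sigma_z## product basis.

```python
import numpy as np

# Singlet |Psi> = (|+,-> - |-,+>)/sqrt(2) in the sigma_z product basis,
# ordered |++>, |+->, |-+>, |-->.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices [a, b, a', b']

# Alice's reduced state: trace out Bob -> maximally mixed, (1/2) * identity.
rho_alice = np.einsum('abcb->ac', rho)
print(rho_alice)                 # [[0.5, 0], [0, 0.5]]

# Joint sigma_z outcomes: the anticorrelation is built in at preparation.
p_same = abs(psi[0])**2 + abs(psi[3])**2     # P(both up) + P(both down)
p_opposite = abs(psi[1])**2 + abs(psi[2])**2
print(p_same, p_opposite)        # 0.0 1.0
```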
 
  • #59
Nugatory said:
It describes probabilities that particular states will be reached after an interaction. Those states that are especially useful to us, we call "measurement results".

I was just responding to Fredrik's comment that quantum mechanics can't be applied in situations that don't involve measurement. To the extent that measurement is an ordinary interaction, we can certainly apply quantum mechanics anywhere, whether there are any observers or not. If we're dealing with something like the effects of quantum mechanics on the early universe, soon after the Big Bang, then we certainly can't talk about measuring devices and observers.

That's the nice thing about decoherence: you don't need observers or measuring devices. Probabilities show up in the transition from pure states to mixed states after tracing over environmental degrees of freedom. Of course, the final step, from a mixed state to a definite outcome, never happens without observers/measurement devices. Which makes me think it never happens at all. (Which is the Many Worlds view.)
 
  • #60
vanhees71 said:
Within the minimal interpretation, there is no problem, because you take the Born interpretation of states really seriously, i.e., before Alice's measurement the spin-z component of both the electron and the positron were (even maximally) undetermined, but due to the preparation in an entangled state, the correlations are already implemented when the electron-positron pair were prepared. Of course, such a thing is not describable with local deterministic hidden-variable theories, and as long as nobody finds a consistent non-local deterministic theory which is as successful as QT, I stick to (minimally interpreted) QT :-).

That kind of correlation/entanglement is present in classical probability, as well. I have a red ball and a black ball, and I put each into a sealed box, and mix up the boxes. I send one box to Alice and one box to Bob. Then before either opens his or her box, we can describe the situation as follows:

  • The probability that Alice has a red ball is 1/2.
  • The probability that Alice has a black ball is 1/2.
  • The probability that Bob has a red ball is 1/2.
  • The probability that Bob has a black ball is 1/2.
  • The probability that they both have red balls is 0.

So the probability distribution is "entangled". It's nonlocal, in the sense that the last probability involves distant events. If either Alice or Bob opens his or her box, immediately the appropriate probability distribution "collapses".

However, such entanglement is understood classically by saying that "probabilities" don't refer to anything objective, but instead refer to our lack of information about the true state of the world. The true state either has Alice with a red ball and Bob with a black ball, or vice versa. We just don't know the true state.

In certain ways, probabilities in QM seem very similar to the subjective probabilities of classical probability theory. Alice measures spin-up in the x-direction and immediately knows that Bob is going to measure spin-down in the x-direction. That seems exactly analogous to the classical case of Alice opening her box and finding a red ball, and immediately knowing that Bob has a black ball. So the quantum case should be no more mysterious than the classical case...

Except that we don't have the classical explanation for the correlation. Classically, the explanation is that the ball already had a color prior to opening the box, and opening it only revealed its color. But we don't have the same resolution in the quantum case. We can't assume that the electron already has a spin in the x-direction, we just don't know what it is.
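For comparison, the classical box example can be simulated directly (a throwaway sketch of mine; the run count is arbitrary): all of the correlation is carried by the "true state" fixed at preparation, and opening a box only conditions our knowledge on it.

```python
import random

def classical_trial():
    """One run of the red/black ball experiment: the 'true state' is fixed
    at preparation time; observation only reveals it."""
    alice, bob = random.choice([("red", "black"), ("black", "red")])
    return alice, bob

runs = [classical_trial() for _ in range(100_000)]
p_alice_red = sum(a == "red" for a, _ in runs) / len(runs)
p_both_red = sum(a == "red" and b == "red" for a, b in runs) / len(runs)
p_bob_black_given_alice_red = (
    sum(b == "black" for a, b in runs if a == "red")
    / sum(a == "red" for a, _ in runs)
)
print(p_alice_red)                  # ~0.5
print(p_both_red)                   # 0.0: perfectly anticorrelated
print(p_bob_black_given_alice_red)  # 1.0: the "collapse" is just conditioning
# The quantum case lacks exactly this ingredient: no pre-assigned value exists
# for the unmeasured spin direction (per the Bell discussion above).
```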
 
