Can the Born Rule Be Derived in the Many Worlds Interpretation?

  • #1
stevendaryl
I'm not 100% sure that this is the right forum to post this, but I think that the people who read the Quantum Physics forum might be interested:

Sean Carroll has written a paper explaining how it is possible to derive the Born Rule in the Many Worlds Interpretation of Quantum Mechanics. I'm not sure that it's the final word on the subject, but it does grapple with the big question of how it makes sense to use probabilities to describe a universe that evolves deterministically.

http://www.preposterousuniverse.com...hanics-is-given-by-the-wave-function-squared/
 
  • #2
Very nice. Note to self: Need to read this many times.
 
  • #3
I'm working through the paper http://arxiv.org/pdf/1405.7907v2.pdf at the moment, and I'm finding it a bit more understandable than the blog post. So if you're comfortable with the formalism of quantum mechanics, but uncomfortable with the detail in the blog, I suggest the paper!
 
  • #4
I have had a look at these proofs of the Born rule in MW before. Wallace gives a lot of detail on it in his book, which I forked out for (I tried to find it the other day but couldn't - it's in the land of the lost right now):
http://users.ox.ac.uk/~mert0130/books-emergent.shtml

Anyway, when I read it I carefully went through his proof and its possible objections. For me the bottom line was that the wagering strategy he used amounted to basis independence, which is in fact the key to Gleason's theorem. If the measure doesn't depend on the basis the outcome belongs to (i.e. non-contextuality), then Gleason applies and you get Born.

Sean also seems to agree:
'There is a result called Gleason’s Theorem, which says roughly that the Born Rule is the only consistent probability rule you can conceivably have that depends on the wave function alone. So the real question is not “Why squared?”, it’s “Whence probability?”'

Where I would differ from Sean, and maybe Wallace, is on 'Whence probability'. From my applied math background studying things like statistical modelling, I view determinism as simply a special case of probability - one where the only probabilities allowed are 0 and 1. So if you ask what probability measure can be defined, and apply Gleason, you are not assuming probability. It just turns out that the measure you come up with can't take only the values 0 and 1 - that's a simple corollary of Gleason. For some reason it's not usually presented that way; instead you get the colouring proof of Kochen-Specker, which always struck me as rather strange - maybe people don't want to get into the difficult math of Gleason's usual proof, though the new one based on POVMs is dead simple.
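For reference, here is the standard statement being appealed to (my paraphrase, not a quote from the thread). Gleason's theorem: on a Hilbert space of dimension ##\geq 3##, every countably additive probability measure ##\mu## on the lattice of projection operators has the form
$$\mu(P) = \operatorname{Tr}(\rho P)$$
for some density operator ##\rho##. Since ##\operatorname{Tr}(\rho P)## takes values strictly between 0 and 1 for some projections ##P##, no such measure can be ##\{0,1\}##-valued - which is the corollary referred to above.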

To me this proved QM is inherently probabilistic.

That said, I have discussed this before and others think I am assuming probability to begin with so can't draw that conclusion. Don't see it myself, but it's their view.

Thanks
Bill
 
  • #5
I'm not a fan of these "derivations". I think they're based on some pretty major misconceptions. First of all, they all use the axiom that the Hilbert space of a composite system is the tensor product of the Hilbert spaces of the subsystems. How did that axiom find its way into the foundations of QM in the first place? The answer is that it ensures that probabilities assigned by the Born rule follow this standard rule of probability theory: P(A & B)=P(A)P(B) when A and B are independent.

A derivation of the Born rule (or the fact that the theory should assign probabilities) that relies on the Born rule is of course circular. You can argue that the tensor product stuff has also been derived by Aerts and Daubechies, and that they didn't directly use the Born rule. But their approach is based on another set of axioms, and I think they too can (and should) be justified using the Born rule. So I still think these derivations are circular.

Second, I think it's very naive to think that QM must contain a description of what's actually happening just because it makes excellent predictions about the result of experiments. I think the easiest way to explain what I mean is to consider one of the simplest probability theories.

Let ##X=\{1,2,3,4,5,6\}##. Let ##\Sigma## be the set of all subsets of ##X##. For each finite set ##S##, we will use the notation ##|S|## for the cardinality of ##S##, i.e. the number of distinct elements of ##S##. Define ##P:\Sigma\to[0,1]## by ##P(E)=|E|/|X|## for all ##E\subseteq X##.

So far we have only talked about the purely mathematical part of the theory. We turn this into a theory of applied mathematics, by adding a correspondence rule, i.e. an assumption about how something in the mathematics corresponds to something in the real world: We assume that for each ##E\subseteq X##, ##P(E)## is the frequency with which an ordinary six-sided die will have a result in the set ##E##, in a long sequence of identical experiments.
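As a minimal sketch of this toy theory in Python (my illustration, not part of the original post; a pseudorandom generator stands in for the physical die):

```python
import random

X = {1, 2, 3, 4, 5, 6}

def P(E):
    # The purely mathematical part of the theory: P(E) = |E|/|X|.
    return len(E) / len(X)

# The correspondence rule, tested empirically: the frequency with which
# the result lands in E over a long run of "identical experiments"
# (simulated rolls here).
E = {2, 4, 6}
N = 100_000
rolls = [random.choice(tuple(X)) for _ in range(N)]
freq = sum(r in E for r in rolls) / N

print(f"P(E) = {P(E):.3f}, observed frequency = {freq:.3f}")
# The two numbers agree, yet nothing above describes what is
# "happening" to the die.
```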

Now we have a falsifiable theory, and it makes excellent predictions about the results of experiments. But it tells us nothing about what's happening to the die, and no one in their right mind would think that they can find out what's happening to it only by reinterpreting this theory.

There's no doubt that QM is a far better theory than this one, but it's still a (generalized) probability theory. So why would anyone think that it contains all the information about what's actually going on? I see no reason to think that it does.

I cringe when I see statements like "quantum mechanics is an embarrassment". No, it's not. It's an amazing achievement in science and mathematics, with many awesome applications in technology. Carroll's statement about how it's an embarrassment is however an embarrassment.
 
  • #6
bhobba said:
I have had a look at these proofs of the Born rule in MW before. Wallace gives a lot of detail on it in his book, which I forked out for (I tried to find it the other day but couldn't - it's in the land of the lost right now):
http://users.ox.ac.uk/~mert0130/books-emergent.shtml

If I understand correctly, in Wallace's version, the worlds can interfere, causing lost books.

bhobba said:
Anyway, when I read it I carefully went through his proof and its possible objections. For me the bottom line was that the wagering strategy he used amounted to basis independence, which is in fact the key to Gleason's theorem. If the measure doesn't depend on the basis the outcome belongs to (i.e. non-contextuality), then Gleason applies and you get Born.

But perhaps it's natural in the context of MWI, compared to Copenhagen? In Copenhagen, noncontextuality seems poorly motivated to me, since different measurements should give different results. Or can contextuality also be imported into MWI, since decoherence picks a preferred basis?
 
  • #7
Fredrik said:
I'm not a fan of these "derivations". I think they're based on some pretty major misconceptions. First of all, they all use the axiom that the Hilbert space of a composite system is the tensor product of the Hilbert spaces of the subsystems. How did that axiom find its way into the foundations of QM in the first place? The answer is that it ensures that probabilities assigned by the Born rule follow this standard rule of probability theory: P(A & B)=P(A)P(B) when A and B are independent.

I don't quite see how the Born rule implies the tensor product structure.

Even then, it doesn't mean that the tensor product structure implies the Born rule.
 
  • #8
atyy said:
I don't quite see how the Born rule implies the tensor product structure.
I don't claim to be able to prove this myself. It looks very difficult actually. Aerts and Daubechies use a pretty sophisticated argument based on the quantum logic approach to QM to derive the tensor product stuff. I'm just saying that when I read parts of their paper years ago, it seemed to me that the entire approach could and should be justified using the Born rule. Unfortunately I don't think I can take the time to try to prove that I'm right in this thread.

The part of my post that you're quoting is however not about this very difficult topic. It's about something much easier: I'm saying that the tensor product stuff plus the Born rule ensures that the probability rule "P(A & B)=P(A)P(B) when A and B are independent" holds. The argument for this is just a simple calculation like
$$P(a)P(b) =|\langle a|\psi\rangle|^2|\langle b|\phi\rangle|^2 =\left|\big(\langle a|\otimes\langle b|\big)\big(|\psi\rangle\otimes|\phi\rangle\big)\right|^2 =P(a\, \&\, b).$$ (I'm not sure which notational convention for tensor products of bras is the most popular. Maybe ##\langle b|## should be to the left of ##\langle a|## above.) It seems very likely that this observation was the original motivation for the inclusion of the tensor product stuff in the axioms of QM. The fact that the Born rule is part of the motivation makes derivations of the Born rule that rely on tensor products pretty suspicious. So the advocates of these derivations would probably like to argue that the tensor product stuff is forced upon us by things that have nothing to do with the Born rule. Aerts & Daubechies appear to have made such an argument, but it seems to me that their proof relies on assumptions that can and should be justified by the Born rule. Again, I don't think I can take the time to try to prove that here.
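Here is a quick numerical check of that identity (my sketch, not part of the original argument), using numpy's kron for the tensor product and random states for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim):
    # A random normalized complex vector.
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi, phi = random_state(2), random_state(2)  # subsystem states
a, b = random_state(2), random_state(2)      # outcome states

# P(a)P(b) computed subsystem by subsystem...
lhs = abs(np.vdot(a, psi))**2 * abs(np.vdot(b, phi))**2
# ...equals P(a & b) computed on the tensor-product state.
rhs = abs(np.vdot(np.kron(a, b), np.kron(psi, phi)))**2

assert np.isclose(lhs, rhs)
print(lhs, rhs)  # identical up to floating-point error
```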

atyy said:
Even then, it doesn't mean that the tensor product structure implies the Born rule.
Right. But these derivations are all about how the Born rule follows from the rest of QM (which includes the tensor product stuff), and they rely heavily on the tensor product stuff.
 
  • #9
Fredrik said:
I cringe when I see statements like "quantum mechanics is an embarrassment". No, it's not. It's an amazing achievement in science and mathematics, with many awesome applications in technology. Carroll's statement about how it's an embarrassment is however an embarrassment.

I don't think you have to choose--quantum mechanics can be BOTH an amazing achievement in science and mathematics, and an embarrassment.

I think it's an embarrassment in the sense that we can't really come up with a completely consistent way of understanding what QM is saying about the world.

You can take the operational approach, and just say that it's a calculational tool for predicting probabilities of outcomes of measurements. And that's fine for most purposes, but presumably, a measurement is a particular kind of physical interaction, so what is special about measurement?

It seems to me that either you say there is something special about measurement - that measurements have definite outcomes while other sorts of interactions only have probability amplitudes - or you say that there is nothing special about measurements, in which case they don't have definite outcomes either (or else everything has definite outcomes). Either measurement is different, which is weird, or it's not different, in which case it's not clear what the probabilities in QM are probabilities OF. If everything happens, then what does it mean to say that some things happen with higher probability?

I think it's a mess, conceptually. And it's kind of embarrassing that it's not much clearer 80 years (or however long it was) after it was developed.
 
  • #10
Fredrik said:
It seems very likely that this observation was the original motivation for the inclusion of the tensor product stuff in the axioms of QM.

I don't see that. Tensor products are a natural way to make a composite vector space from two component vector spaces. Maybe the use of a vector space is itself indirectly motivated by the probabilistic interpretation, but tensor products are more general than that.
 
  • #11
Fredrik said:
The part of my post that you're quoting is however not about this very difficult topic. It's about something much easier: I'm saying that the tensor product stuff plus the Born rule ensures that the probability rule "P(A & B)=P(A)P(B) when A and B are independent" holds. The argument for this is just a simple calculation like
$$P(a)P(b) =|\langle a|\psi\rangle|^2|\langle b|\phi\rangle|^2 =\left|\big(\langle a|\otimes\langle b|\big)\big(|\psi\rangle\otimes|\phi\rangle\big)\right|^2 =P(a\, \&\, b).$$

But doesn't the definition of A and B being independent hold regardless of the tensor product structure and the Born rule? The equations you wrote seem to say something else: the Born rule and the tensor product structure mean that product states are independent.

Could a rule that is contextual lead to observables that factor being correlated on a product state?
 
  • #12
atyy said:
But perhaps it's natural in the context of MWI, compared to Copenhagen? In Copenhagen, noncontextuality seems poorly motivated to me, since different measurements should give different results. Or can contextuality also be imported into MWI, since decoherence picks a preferred basis?

In MW what Wallace does is very natural. We know we will only experience one world - but which one? A betting strategy based on decision theory seems quite reasonable to determine it. But maybe that's just me and my applied math background - decision theory is rather an important area.

In Wallace's book he has all sorts of theorems proving this and that about such strategies, but for my money it's merely repackaged Gleason - if I recall correctly, he even states explicitly in one of his theorems that it's equivalent to basis independence.

I love the elegance of MW - really, it's beauty incarnate - but for me it's unconvincing. This exponential dilution of the wave-function at a massive rate is something I simply can't swallow.

Thanks
Bill
 
  • #13
bhobba said:
I love the elegance of MW - really, it's beauty incarnate - but for me it's unconvincing. This exponential dilution of the wave-function at a massive rate is something I simply can't swallow.

I don't understand what's hard to swallow about that. That's sort of normal for probability distributions, isn't it? They might start off highly peaked at one possibility, but with time, they spread out in all directions, becoming more and more "diluted". Why does becoming exponentially diluted seem implausible?
 
  • #14
stevendaryl said:
You can take the operational approach, and just say that it's a calculational tool for predicting probabilities of outcomes of measurements. And that's fine for most purposes, but presumably, a measurement is a particular kind of physical interaction, so what is special about measurement?
I think it's fine for all purposes. It would be nice if QM had something more to tell us, but that's not a reason to think that it does.

The only thing that's special about measurements is that they are the interactions that we use to test the accuracy of the theory's predictions.

stevendaryl said:
It seems to me that either you say there is something special about measurement, that measurements have definite outcomes, while other sorts of interactions only have probability amplitudes,
To me it seems sufficient that the final state of the measuring device is for practical purposes indistinguishable from a "definite" state.

stevendaryl said:
or else you say that there is nothing special about measurements,
I do. I mean, I think they follow the same laws, because I find the alternative extremely absurd. But "measurements" is still a proper subset of "interactions", so in a very restricted sense, they're "special".
 
  • #15
atyy said:
But doesn't the definition of A and B being independent hold regardless of the tensor product structure and the Born rule?
Independence is the result P(a & b)=P(a)P(b). This result follows from the tensor product stuff. Perhaps it also follows from something else. This is why this argument doesn't prove that we need to use tensor products, and that's why the Aerts & Daubechies argument is so appealing. It does prove that (given some assumptions that are rather difficult to understand) we have to use tensor products.

atyy said:
The equations you wrote seem to say something else: the Born rule and the tensor product structure mean that product states are independent.
Sure, but doesn't "the tensor product stuff" include the assumption that if |S> is the state of system A, |T> is the state of system B, and A and B haven't yet interacted in any way, then the state of the composite system is ##|S\rangle\otimes|T\rangle##? Aren't we saying essentially the same thing?

atyy said:
Could a rule that is contextual lead to observables that factor being correlated on a product state?
I haven't really thought about that.
 
  • #16
Fredrik said:
The only thing that's special about measurements is that they are the interactions that we use to test the accuracy of the theory's predictions.

If there is nothing special about measurements, and surely there isn't, then it's hard to understand how measurements have definite outcomes, when other sorts of interactions don't, according to QM. You could say that measurements don't, either, but that leads to Many Worlds, and it's kind of hard to understand how probabilities work if everything happens.

Fredrik said:
To me it seems sufficient that the final state of the measuring device is for practical purposes indistinguishable from a "definite" state.

We can pretend, for the purpose of getting on with doing science, but the theory itself shows that it's just pretending. Either the outcome is definite, or it's not. The one choice leads to weird wave function collapse, the other choice leads to weird Many Worlds. Conceptually, it seems a mess to me.
 
  • #17
stevendaryl said:
I don't understand what's hard to swallow about that. That's sort of normal for probability distributions, isn't it? They might start off highly peaked at one possibility, but with time, they spread out in all directions, becoming more and more "diluted". Why does becoming exponentially diluted seem implausible?

Yeah - but in MW a wavefunction is not like a probability, which is an abstract thing - it's considered very real indeed. That's why its dilution seems very implausible to me - real things from everyday experience can't be infinitely diluted, which is basically what is required.

Of course that doesn't disprove it - I just find it hard to take seriously.

Thanks
Bill
 
  • #18
bhobba said:
Yeah - but in MW a wavefunction is not like a probability, which is an abstract thing - it's considered very real indeed. That's why its dilution seems very implausible to me - real things from everyday experience can't be infinitely diluted, which is basically what is required.

Of course that doesn't disprove it - I just find it hard to take seriously.

Thanks
Bill

This confused me and made me want to understand this argument better. What is it about the dilution that seems implausible?

Could you give an example of something similar?
 
  • #19
Quantumental said:
This confused me and made me want to understand this argument better. What is it about the dilution that seems implausible? Could you give an example of something similar?

Look around you. Take anything that's considered to exist in a real sense - using the common-sense view of real. You cannot continually subdivide it forever. That a wavefunction is considered real and isn't like that just seems implausible to me. It doesn't prove anything - it's simply an opinion - opinions are like bums, everyone has one, and that doesn't make it correct. But it's my view.

As an example, take a glass of water. Keep dividing it and you eventually stop at subatomic particles - you can go no further. Yet a wavefunction, which is considered just as real, can keep being divided indefinitely - that simply strikes me as implausible.

Thanks
Bill
 
  • #20
Fredrik said:
Sure, but doesn't "the tensor product stuff" include the assumption that if |S> is the state of system A, |T> is the state of system B, and A and B haven't yet interacted in any way, then the state of the composite system is ##|S\rangle\otimes|T\rangle##? Aren't we saying essentially the same thing?

Yes, that makes sense with this additional statement about systems that have not interacted.

Now I'm wondering whether the statement about systems not having interacted has to be added by hand, or whether it can be "derived" from the idea that composite systems have tensor product Hilbert spaces.

If by "non-interacting subsystems" one means that the subsystems are independent, then I think it can be derived, if one assumes the Born rule for each non-interacting subsystem. In which case, the tensor product structure seems to be a consequence of the Born rule, as you said.

But if by "non-interacting" one means that the Hamiltonian of the composite system has no product terms, then I don't immediately see how to exclude the possibility that the non-interacting subsystems should not be independent. Which I guess is a way of saying that the tensor product structure does not imply the Born rule.
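A quick numerical illustration of that second reading (my sketch): when the composite Hamiltonian has no product terms, i.e. ##H = H_A\otimes I + I\otimes H_B##, the time evolution factorizes, so a product state stays a product state - which by itself says nothing about probabilities without the Born rule:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def rand_herm(d):
    # A random Hermitian matrix, standing in for a subsystem Hamiltonian.
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

HA, HB = rand_herm(2), rand_herm(2)
I = np.eye(2)
H = np.kron(HA, I) + np.kron(I, HB)  # "non-interacting": no product terms

t = 1.7
U = expm(-1j * t * H)
U_factored = np.kron(expm(-1j * t * HA), expm(-1j * t * HB))

# True: H_A (x) I and I (x) H_B commute, so the evolution factorizes
# and an initial product state remains a product state.
print(np.allclose(U, U_factored))
```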
 
  • #21
bhobba said:
In MW what Wallace does is very natural. We know we will only experience one world - but which one? A betting strategy based on decision theory seems quite reasonable to determine it. But maybe that's just me and my applied math background - decision theory is rather an important area.

In Wallace's book he has all sorts of theorems proving this and that about such strategies, but for my money it's merely repackaged Gleason - if I recall correctly, he even states explicitly in one of his theorems that it's equivalent to basis independence.

I love the elegance of MW - really, it's beauty incarnate - but for me it's unconvincing. This exponential dilution of the wave-function at a massive rate is something I simply can't swallow.

In http://arxiv.org/abs/0906.2718 he says that although there is a decision-theoretic mathematical proof, that isn't enough to make MWI work.

"A proof that rational agents in an Everett universe must act in accordance with the Born-rule probabilities falls short of a full solution of the probability problem: we might also ask how this decision-theoretic notion of probability connects with our use of probability in assessing the evidence for quantum mechanics, or with our ordinary, pretheoretic notions of probability as a guide to action in cases of uncertainty. I shall address neither question here, though (for discussions of the former question, see Greaves and Myrvold’s contribution to this volume, Wallace (2006), and part II of Wallace (2010); for the latter, see Wallace (2005), part III of Wallace (2010), Saunders (1998), and Saunders’ contribution to this volume.)"

Do you think any of the discussions of the remaining problems he references are conclusive?

bhobba said:
Look around you. Take anything that's considered to exist in a real sense - using the common-sense view of real. You cannot continually subdivide it forever. That a wavefunction is considered real and isn't like that just seems implausible to me. It doesn't prove anything - it's simply an opinion - opinions are like bums, everyone has one, and that doesn't make it correct. But it's my view.

As an example, take a glass of water. Keep dividing it and you eventually stop at subatomic particles - you can go no further. Yet a wavefunction, which is considered just as real, can keep being divided indefinitely - that simply strikes me as implausible.

Is it implausible even when one considers that Hilbert space is exponentially large? Say each subsystem is d dimensional; then the Hilbert space of N identical subsystems is ##d^N## dimensional. A typical illustration of how large Hilbert space is: "Ignoring causality and the lack of materials, even if we filled up our entire Hubble volume, the whole visible universe, with our best classical storage device, we could only store the quantum state of a few hundred spins using this huge classical memory." http://fqxi.org/community/forum/topic/1559
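To put rough numbers on that (my back-of-envelope sketch, taking ~1e80 as the usual estimate of the number of atoms in the visible universe):

```python
# Bytes needed to store the state of N spin-1/2 systems classically:
# 2^N complex amplitudes at 16 bytes (complex128) each.
for N in (10, 50, 100, 300):
    amplitudes = 2 ** N
    bytes_needed = amplitudes * 16
    print(f"N={N:3d}: {amplitudes:.3e} amplitudes, "
          f"{bytes_needed:.3e} bytes (atoms in universe ~ 1e80)")
# Already at a few hundred spins the requirement dwarfs any
# conceivable classical memory.
```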
 
  • #22
atyy said:
In http://arxiv.org/abs/0906.2718 he says that although there is a decision-theoretic mathematical proof, that isn't enough to make MWI work.

Plausibility isn't something objective - it's not something that's really provable - it's either plausible to you or it isn't. Given the situation in MW, I find the idea of using decision theory to figure out some kind of utility to help decide the likelihood of experiencing a certain world quite reasonable. It's actually similar to the Bayesian view of probabilities.

MW is a deterministic theory, but it doesn't tell you what world you will experience. All you can use is decision theory. It's exactly the same as how decision theory is used in applications: we have some situation that is likely deterministic, but the model doesn't tell us the outcome - that is the exact situation decision theory was designed to help with.

atyy said:
Is it implausible even when one considers that Hilbert space is exponentially large? Say each subsystem is d dimensional; then the Hilbert space of N identical subsystems is ##d^N## dimensional. A typical illustration of how large Hilbert space is: "Ignoring causality and the lack of materials, even if we filled up our entire Hubble volume, the whole visible universe, with our best classical storage device, we could only store the quantum state of a few hundred spins using this huge classical memory." http://fqxi.org/community/forum/topic/1559

To me that just makes it more implausible.

Thanks
Bill
 
  • #23
stevendaryl said:
You could say that measurements don't, either, but that leads to Many Worlds, and it's kind of hard to understand how probabilities work if everything happens.
[...]
The one choice leads to weird wave function collapse, the other choice leads to weird Many Worlds. Conceptually, it seems a mess to me.
I don't think QM implies anything like this. What leads to "many worlds or collapse" isn't QM, but QM together with the unjustified assumptions that people add on top of it. The main problematic assumption is the identification of the system with the state vector. States should be viewed as probability measures, not as systems, or things that represent all the properties of a system. In my simple example of a classical probability theory, you wouldn't identify the function P with the system, or think that it represents all the properties of the system, but this is the sort of thing people do with QM all the time, and they often don't even understand that they're making an assumption.
 
  • #24
Fredrik said:
I don't think QM implies anything like this. What leads to "many worlds or collapse" isn't QM, but QM together with the unjustified assumptions that people add on top of it

Indeed.

QM's formalism does not imply collapse. It simply says the possible outcomes of an observation are the eigenvalues of the observable, and the Born rule gives the probability of a particular outcome.
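A minimal sketch of that statement (my example; the Pauli-x observable and a spin-up-along-z state are assumptions chosen for concreteness):

```python
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)  # observable: Pauli-x
psi = np.array([1, 0], dtype=complex)          # state: spin-up along z

# The possible outcomes are the eigenvalues of A; the Born rule gives
# P(outcome a_i) = |<a_i|psi>|^2 for the corresponding eigenvectors.
eigvals, eigvecs = np.linalg.eigh(A)
probs = np.abs(eigvecs.conj().T @ psi) ** 2

for a, p in zip(eigvals, probs):
    print(f"outcome {a:+.0f} with probability {p:.2f}")  # 0.50 each
```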

Thanks
Bill
 
  • #25
Fredrik said:
I don't think QM implies anything like this. What leads to "many worlds or collapse" isn't QM, but QM together with the unjustified assumptions that people add on top of it.

I don't agree. I think what you're saying is exactly backwards. You have to add something to QM (something special about measurement that collapses the wave function) to AVOID many-worlds.

Look at a question about an ordinary electron: Between measurements, does it have spin-up in the z-direction, or spin-down? Either it has a definite answer, or it doesn't. Bell's inequality seems to imply that in certain circumstances, it just doesn't have a definite answer (between measurements, anyway).

Now you ask the same question about a measurement result. Did Alice measure spin-up, or spin-down? Decoherence basically shows us that we can pretend, for all intents and purposes, that it has a definite answer. But if there is nothing special about measurement, then measurement can't be any more definite than an electron's spin. Not at a fundamental level.
 
  • #26
bhobba said:
QM's formalism does not imply collapse. It simply says the possible outcomes of an observation are the eigenvalues of the observable, and the Born rule gives the probability of a particular outcome.

My point is that either making an observation is a special kind of physical interaction, or it isn't. If it's special, then what's special about it? If it isn't special, then what does this "the possible outcomes are the eigenvalues of the observable" mean? What is there inherent in a situation that makes it a measurement of an observable? What makes something an outcome?

For practical purposes, something is a measurement of an observable if afterward there is a record that the observable had a particular value. But why should a record have definite values, when things like electron spins don't? Decoherence is a good answer to why we don't see any evidence of indeterminacy (lack of definite values) in macroscopic objects, but it doesn't actually get rid of the indeterminacy. To me, decoherence is many-worlds, together with an explanation as to why the different worlds don't seem to interfere.
 
  • #27
Fredrik said:
The main problematic assumption is the identification of the system with the state vector. States should be viewed as probability measures, not as systems, or things that represent all the properties of a system. In my simple example of a classical probability theory, you wouldn't identify the function P with the system, or think that it represents all the properties of the system, but this is the sort of thing people do with QM all the time, and they often don't even understand that they're making an assumption.

There's a good reason that people don't do it in classical probability theory, but do it in quantum theory. In classical probability theory as applied to a real-world problem, the assumption is that the system being studied has some definite, though unknown, properties. We use a probability distribution to quantify our uncertainty about those unknown properties.

The same approach doesn't make sense in QM, because results such as Bell's theorem imply that we CAN'T assume that the objects under study have definite but unknown properties. (Not without nonlocal interactions, anyway). So the distinction between the system, and the state, which only quantifies our ignorance about the system, doesn't make a lot of sense in QM.
 
  • #28
bhobba said:
Yeah - but in MW a wavefunction is not like a probability, which is an abstract thing - it's considered very real indeed. That's why its dilution seems very implausible to me - real things from everyday experience can't be infinitely diluted, which is basically what is required.
That's pretty much my feeling as well. Treating the state vector as something "real", and the collapse as an unphysical trick that hides our ignorance about the difference between "time evolution" and "collapse", leads to MWs in a quite natural way. But it is hard to grasp how a "real" object "dilutes into the infinite dimensional Hilbert space", even if this is consistent mathematically.

What I never understood is how energy conservation works in this picture. From the formalism it follows that energy is conserved in the entire Hilbert space (so energy "dilutes into the infinite dimensional Hilbert space"), but from our perspective restricted to one branch (!) it seems that energy is conserved for the branch which contains "me".
 
  • #29
stevendaryl said:
There's a good reason that people don't do it in classical probability theory, but do it in quantum theory. In classical probability theory as applied to a real-world problem, the assumption is that the system being studied has some definite, though unknown, properties. We use a probability distribution to quantify our uncertainty about those unknown properties.

The same approach doesn't make sense in QM, because results such as Bell's theorem imply that we CAN'T assume that the objects under study have definite but unknown properties. (Not without nonlocal interactions, anyway). So the distinction between the system, and the state, which only quantifies our ignorance about the system, doesn't make a lot of sense in QM.

It's precisely this argument that I never understood, and that's why I'm a follower of the minimal statistical (ensemble) interpretation. The only difference between a classical deterministic model of the physical world, where we use probabilities to describe our incomplete knowledge about the state of the deterministic object under consideration, and the quantum-theoretical probabilities is that a quantum system is simply not deterministic, i.e., we cannot have complete knowledge about the values of all possible observables, because states that determine all these values simply do not exist. The probabilities are used in both cases to describe random outcomes of undetermined observables. In the classical case, the undetermined observables are in fact determined, but we do not know their values because of incomplete information. In the quantum case not all observables can be determined on one quantum system, and that's why we cannot have knowledge about all observables and thus use a probabilistic description.

So why do you accept, without further ado, a probabilistic description in the classical case, where the missing knowledge could in principle be had, but not in the quantum case, where it is simply impossible to have complete knowledge about the values of all observables, because for a quantum system not all observables can be determined even in principle?

Nevertheless, I also cannot follow the qbists, who take this argument to the extreme and declare all knowledge about a quantum system subjective. The probabilities predicted by quantum theory are as objective as the probabilities used in classical statistical mechanics. They can be verified or falsified by doing experiments and statistically evaluating their outcomes. For that purpose you need to independently prepare the system under consideration in a given state (leading at the same time to the instrumental definition of a quantum state as an equivalence class of preparation procedures), then make any measurements you like and analyze the outcomes statistically. So far, quantum theory has been verified with excellent accuracy to give the correct probabilistic predictions about the outcomes of such experiments.

So, why should one add unnecessary (and often confusing) ideas like "many worlds" or "Bohmian trajectories" on top of an excellently working theory, namely quantum theory in its ensemble representation?
 
  • #30
vanhees71 said:
So why do you accept, without further ado, a probabilistic description in the classical case, where the missing knowledge could in principle be had, but not in the quantum case, where it is simply impossible to have complete knowledge about the values of all observables, because for a quantum system not all observables can be determined even in principle?

The question is: Your probabilities are probabilities of WHAT? It's not so much whether things are deterministic, or not. You can add randomness to classical physics, by just introducing a nondeterministic process--some idealized coin flip, or radioactive decay, or whatever, such that the process has two (or more) distinguishable outcomes, and such that the complete physical description of the system before the process is consistent with either outcome. But the point is that in classical probability, probabilities describe properties that eventually become definite. Before flipping the idealized coin, the outcome might be indefinite, but afterward, the outcome is definite. The probabilities reflect our lack of information about some property that has a definite value (or will, at some point in the future). Basically, there is a statement: "The coin flip will result in heads-up." that at some point will be either true or false, but we don't now know which.

The difference with QM is that statements such as "The electron has spin-up in direction x" not only have an unknown truth value, but they don't have a truth value. They can't have a truth value--that would be a hidden variable, which Bell's theorem rules out (again, if we disallow nonlocal interactions). I don't see how probabilities in the QM can reflect ignorance of a truth value if the truth value doesn't exist. If you ask "What color is the real number pi?" there is no answer--pi doesn't have a color. It doesn't make sense to say that it has a 20% probability of being red.

That's my complaint about probabilities in QM. It doesn't make sense to say that probabilities reflect ignorance about system properties if our theory tells us that the system just doesn't HAVE those properties.

I know that after a measurement, it seems to be the case that "The electron was measured to have spin-up in direction x" is either true or false. So it seems that the statement has a definite truth value, afterward, so we can apply probabilities in the same way we do classically, to reflect our ignorance about the truth value of a statement that has (or will have) a definite truth value. But that's where the issue of whether there is something special about measurement comes in. If "The electron has spin-up in the x-direction" has no truth value before the measurement, and there is nothing special about measurements, then why should "The electron was measured to have spin-up in the x-direction" have a definite truth value?
 
  • #31
Lubos wrote up an analysis of Carroll's paper that I'd say should factor into this discussion.

For example:
An example of Carroll-Sebens circular reasoning is that they assume that small off-diagonal entries of a density matrix may be neglected – they assume it before they derive or admit that the small entries correspond to probabilities. That's, of course, illegitimate. If you want to replace a small quantity by zero, and to be able to see whether the replacement is really justified, you have to know what the quantity actually is. Moreover, these things are only negligible if classical physics becomes OK, so whatever you do with this approximation is clearly saying nothing whatsoever about the intrinsic, truly quantum, properties of quantum mechanics in the quantum regime!

...

The other obvious problem with the ESP quote above is that it says what the "credence" is independent of. But a usable theory should actually say what it does depend upon. Ideally, one should have a formula. If one has a formula, one immediately sees what it depends upon and what it doesn't depend upon. A person who actually has a theory would never try to make these unnecessarily weak statements that something does not depend on something else. Isn't it far more sensible and satisfactory to say what the quantity does depend upon – and what it's really equal to? Quantum mechanics answers all these questions very explicitly, Carroll and Sebens don't.
 
  • #32
atyy said:
But doesn't the definition of A and B being independent hold regardless of the tensor product structure and the Born rule?

Fredrik said:
Independence is the result P(a & b)=P(a)P(b). This result follows from the tensor product stuff. Perhaps it also follows from something else. This is why this argument doesn't prove that we need to use tensor products, and that's why the Aerts & Daubechies argument is so appealing. It does prove that (given some assumptions that are rather difficult to understand) we have to use tensor products.

Judging from your quotes it seems you think there is a way to do probability without bilinear forms? But any probability P(a) is a bilinear form, e.g.:

https://www.physicsforums.com/attachment.php?attachmentid=71661&stc=1&d=1406376963
(Parthasarathy - Quantum Stochastic Calculus, p. 1)

and because of this bilinear form-ness, ##P(a) = g(a,a) = \langle a,\psi(a)\rangle## immediately implies an isomorphism to a tensor product

https://www.physicsforums.com/attachment.php?attachmentid=71662&stc=1&d=1406377319
(Lang Linear Algebra 2nd Edition appendix)

so it looks to me like you're just unavoidably using tensor product structure no matter what you do, i.e. if you do something else or ignore the tensor product structure, it's there anyway (no matter what symbols you use), no?

I could be wrong, but it looks to me like you just can't use probabilities without using bilinear forms and tensor products. Once that is established, we add more structure when exhibiting something like
$$P(a)P(b) = g(a,a)\,h(b,b) = \langle a,\psi(a)\rangle\langle b,\phi(b)\rangle = \dots = P(a\,\&\,b)$$
it's going to have to use more tensor products to give some meaning to whatever the & is supposed to mean (in the case of independence you exploit Pythagoras, but this special case lives in a structure built into bilinear forms which are an unavoidable consequence of introducing the very notion of probability).

If you admit the notion of probability exists, Parthasarathy's little calculation (or it in the limit) makes it completely obvious the Born rule will exist (if you've decided to base your mechanics on the notion of a state vector, which just falls out of the notion of probability...), and a similar point is made by Lubos:

Again, we need to know that mutually exclusive states are orthogonal and the probability has something to do with the length of a state vector (or its projection to a subspace).

That's everything we need to assume if we want to prove Born's rule.

...

That's the real reason why Born's rule works. The probabilities and mutual exclusiveness has to be expressed as a mathematical function or property of state vectors and the totally general rules for probabilities (like the additive behavior of probabilities under "or") heavily constrain what the map between the "human language" (probability, mutual exclusiveness) and the "mathematical properties" can be. The solution to these constraints is basically unique. The probabilities have to be given by the second powers of the moduli of the complex probability amplitudes. It's because only such "quadratic" formulae for the probabilities obey the general additive rules, thanks to the Pythagorean theorem.
 
  • #33
bolbteppa said:
Lubos wrote up an analysis of Carroll's paper that I'd say should factor into this discussion. For example:

An example of Carroll-Sebens circular reasoning is that they assume that small off-diagonal entries of a density matrix may be neglected – they assume it before they derive or admit that the small entries correspond to probabilities. That's, of course, illegitimate. If you want to replace a small quantity by zero, and to be able to see whether the replacement is really justified, you have to know what the quantity actually is. Moreover, these things are only negligible if classical physics becomes OK, so whatever you do with this approximation is clearly saying nothing whatsoever about the intrinsic, truly quantum, properties of quantum mechanics in the quantum regime!

That's the complaint I've always had about decoherence. Basically, decoherence can be used to argue that we can effectively treat macroscopic superpositions as proper mixtures because the interference terms are negligible. But "negligible" is relative to an interpretation of amplitudes as probabilities (when squared).

I wouldn't call such arguments completely circular--maybe they're "semi-circular". What they show is that the interpretation of amplitudes via the Born rule is robustly self-consistent. There are lots of different ways to think about making sense of probabilities in QM, and they all boil down to something that is consistent with the Born rule. More than that, there doesn't seem to be any alternative to the Born rule that is at all plausible. But logically speaking, I don't think the Born rule can be proved starting with a theory that doesn't have it.
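A toy illustration of that point (my sketch, assuming a pure dephasing map on a single qubit - not anything from the paper): the off-diagonal interference terms decay, but calling them "negligible" already presupposes that the squared amplitudes on the diagonal are the quantities that matter.

```python
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition
rho = np.outer(psi, psi)                 # pure-state density matrix

for gamma in (0.0, 1.0, 10.0):           # dephasing strength, arbitrary units
    # Pure dephasing: shrink the off-diagonal elements, keep the diagonal.
    decay = np.array([[1.0, np.exp(-gamma)],
                      [np.exp(-gamma), 1.0]])
    decohered = rho * decay
    print(f"gamma={gamma:5.1f}  off-diagonal={decohered[0, 1]:.5f}  "
          f"diagonal={np.diag(decohered)}")
# The superposition looks more and more like a proper mixture, but only
# if we read the matrix entries as (Born-rule) probabilities.
```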
 
  • #34
bhobba said:
Plausibility isn't something objective - it's not something that's really provable - it's either plausible to you or it isn't. Given the situation in MW, I find the idea of using decision theory to figure out some kind of utility to help decide the likelihood of experiencing a certain world quite reasonable. It's actually similar to the Bayesian view of probabilities.

The quote from Wallace in post #21 suggests that the remaining issue is about something objective, and not plausibility.
 
  • #35
stevendaryl said:
Look at a question about an ordinary electron: Between measurements, does it have spin-up in the z-direction, or spin-down? Either it has a definite answer, or it doesn't. Bell's inequality seems to imply that in certain circumstances, it just doesn't have a definite answer (between measurements, anyway).

Now you ask the same question about a measurement result. Did Alice measure spin-up, or spin-down? Decoherence basically shows us that we can pretend, for all intents and purposes, that it has a definite answer. But if there is nothing special about measurement, then measurement can't be any more definite than an electron's spin. Not at a fundamental level.
I have no objections against this.

stevendaryl said:
You have to add something to QM (something special about measurement that collapses the wave function) to AVOID many-worlds.
This is where we disagree. It's clear what the source of the disagreement is. What's QM to you is "QM plus an unnecessary assumption" to me. I agree that your version of QM is a many-worlds theory.

However, I wouldn't say that we can avoid many worlds by adding a collapse axiom. I think that your version of QM plus a collapse axiom would just be inconsistent, but we can perhaps find a similar theory that makes similar predictions and contains a collapse mechanism. I would consider this an alternative to QM, not an interpretation of QM.

stevendaryl said:
The same approach doesn't make sense in QM, because results such as Bell's theorem imply that we CAN'T assume that the objects under study have definite but unknown properties. (Not without nonlocal interactions, anyway).
I think you're giving Bell's theorem too much credit here. The familiar spin-1/2 inequality tells us that a theory (different from QM) in which the state fully determines the outcome of each measurement with a Stern-Gerlach device can't make the same predictions as QM. Since experiments agree with QM, this means that all such theories have been falsified.

This is a cool result, but to argue that it forces us to identify systems with pure states, you have to jump to the conclusion that it also rules out the possibility that QM works because systems are doing things that QM doesn't describe in detail. I really don't think Bell's theorem is strong enough to do anything like that. It rules out a class of ontological models for QM, but it doesn't say a whole lot about whether an even better theory could exist.
 