
Are World Counts Incoherent?

  1. Nov 24, 2005 #1
    This discussion is moved from the thread "my paper on the Born rule" (post #40). The key claim is:

    Wallace and Greaves and many others seem to accept the claim that if there are naturally distinguishable branches/worlds in the Everett approach, then it is natural to assign probabilities proportional to world counts, producing a difficult conflict with the Born rule. They claim, however, that world counting is incoherent. Page 21 of Wallace's paper cited above gives the most elaboration I've seen defending this view.

    How correct is their claim? Are world counts incoherent in all contexts, or only in some? In particular, are they coherent in this situation of most interest to me: counting arguments suggest that relative to the parent world where we started testing the Born rule, our world is very unusual, having seen measurement statistics close to the Born rule, while the vast majority of worlds should instead see near uniform measurement statistics.

    In posts to follow, I'll quote Wallace's argument, and offer my own opinions.
    Last edited: Nov 24, 2005
  3. Nov 24, 2005 #2
    As promised, here is the longest discussion I've seen on this issue. In "Quantum Probability from Subjective Likelihood: Improving on Deutsch's Proof of the Probability Rule", David Wallace writes:

    Last edited: Nov 24, 2005
  4. Nov 24, 2005 #3
    Analogy with Entropy

    Informally, entropy is often defined in terms of "the number of states of a system consistent with macroscopic knowledge of that system." Under such a definition, the exact number of states is in a sense very sensitive to the details of the exact model one uses, including the exact system boundary. It is also very sensitive to how big a difference is enough to call something a different state. Systems with an apparent infinity of degrees of freedom are particularly troublesome. Tiny changes can easily change the number of states by a factor of 10^10 or more.

    Nevertheless, the concept of entropy is very useful, allowing robust predictions of system behavior. When the number of states is of the order of 10^10^25, a factor of 10^10 makes little difference. We have standard and apparently reasonable ways to deal with ambiguities. It is hard to escape the impression that real systems really do have entropy.
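
    To make the robustness concrete, here is a quick back-of-the-envelope check in Python (purely illustrative, not part of the argument): entropy goes as the log of the state count, and on that scale an ambiguity factor of 10^10 on top of 10^10^25 states is a change of roughly one part in 10^24.

    [code]
    # Toy check: how much does an ambiguity factor of 10^10 change an entropy
    # of order log(10^(10^25))?  Work with log10 of the state count, since the
    # count itself is far too large to represent directly.
    log10_states = 1e25        # log10 of the number of states (~10^(10^25) states)
    log10_fudge = 10.0         # log10 of an ambiguity factor of 10^10

    relative_change = log10_fudge / log10_states
    print(relative_change)     # ~1e-24: the entropy shifts by about one part in 10^24
    [/code]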

    The driving concept of a state is a difference that makes enough of a difference in long-term system evolution. So the standard quantum entropy definition just takes the projection on some basis, ignoring the fact that quantum states can also vary in relative phases. Apparently the relative phases are not differences that make enough difference regarding long term system evolution.

    The driving concept of a world is a component of a quantum state whose future evolution is independent enough from the evolution of other states. While there might be ambiguities in classifying borderline cases, it is not clear to me that these ambiguities are so severe as to make the concept of world count "incoherent" or "meaningless."

    In particular, it is not clear that those ambiguities can go very far toward reducing the apparent strangeness of our seeing Born rule frequencies in measurements in our world, when in simple models the typical world sees near uniform frequencies.
  5. Nov 24, 2005 #4



    Ah, I saw that a moderator moved this thread.

    I agree with you that "world counting" is always a possibility, at least grossly, with the idea that if we make it precise, with one or the other arbitrary postulate, the numbers will not vary a lot. However, I have more difficulties, a priori, with assigning equal probabilities to each of the worlds. In thermodynamics, there's a good reason to do so, and that is ergodicity: the state is in any case rattling back and forth erratically in phase space, so any "initial condition" is about equivalent and will soon cover the entire phase space. But in "world counting" there's no such thing as ergodicity! We're not hopping randomly (at least, that's how I understand it) through all the worlds, so that we get an "average impression" of all the worlds, in which case it could be justified to say that they all have an equal probability. We just "happen to be" in one of the worlds; but how is this "happen to be" distribution done? Uniformly?

    As a silly illustration of what I'm trying to say: imagine that there are 10^10 ants on the planet and 5 humans (and let us take it for the sake of argument that ants are conscious beings). You happen to be one of the humans, and you could find that rather strange, because there is a large probability for you to be an ant. There are many more "ant worlds" than "human worlds". Does such an assignment of probability even make sense? (I do realise the silliness of my proposal :smile: )
  6. Nov 24, 2005 #5
    Well, I think that I would actually agree with Wallace's statement that "Realistic models of macroscopic systems are invariably infinite-dimensional." However, I do not agree with Wallace that this implies that world-counting is necessarily incoherent.

    In trying to think about how to explain my point of view, I find myself returning to my high school calculus days, when I first encountered the concept of the limit. Basically, the limit is a conceptual bridge from "discrete" to "continuous." If we want to define the area under a curve, for example, then we start out approximating the area as being composed of discrete chunks, and we add them up. Then we take the limit as the number of chunks approaches infinity. Of course, for this scheme to work, we require that the area calculation remain stable as we take our limit.
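
    As a reminder of how that stability requirement plays out in the simplest case, here is a minimal numerical sketch (my own toy example, with an arbitrary choice of function): the approximate area settles down as the number of chunks grows.

    [code]
    # Riemann-sum illustration of "stability under refinement": the midpoint
    # approximation to the area under f(x) = x^2 on [0, 1] converges to 1/3
    # as the number of chunks grows.
    def area(n):
        width = 1.0 / n
        return sum(((i + 0.5) * width) ** 2 * width for i in range(n))

    for n in (10, 100, 1000, 10000):
        print(n, area(n))   # approaches 0.3333...
    [/code]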

    So I have been borrowing some of these concepts from my high school days in considering how to make the APP (outcome counting) into a coherent scheme. If we assume outcome counting, and we also assume that Wallace is correct to assert that the number of branches is formally infinite, then the way to reconcile these two facts would go something like this:

    1) There exists some sort of meaningful method of approximating the infinite number of worlds associated with a measurement via a finite representative "sampling" of these worlds.

    2) We apply outcome counting to the finite worlds generated via the above approximation method, and use this to give us probabilities.

    3) There exists some method of calculating probabilities in the limit as the number of sampled worlds becomes infinite (ie, as our model becomes more fine-grained).

    4) We require that the calculated probabilities must remain stable as we take this limit.

    Wallace's argument, it seems to me, is simply an assertion that this cannot be done. But there are many examples of situations where such limits CAN be taken, and DO remain stable. Your (Robin's) discussion of entropy is one such situation. I argued earlier in Patrick's thread that the FPI presents another such situation. So the interesting question, to me, is: how exactly do we go about the above procedure in the attempt to make the APP into a coherent scheme?
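
    To pin down what I have in mind by steps 1) through 4), here is a toy sketch (entirely my own construction, not anything taken from Wallace's paper) of checking whether outcome-counting probabilities remain stable as the branch structure is refined. Uniform refinement leaves the APP probabilities fixed; outcome-dependent refinement makes them drift, which is exactly the failure mode Wallace worries about.

    [code]
    # Toy model of steps 1) - 4): outcome counting under successive refinement.
    # Two coarse outcomes; counts[i] is the number of fine-grained branches
    # currently representing outcome i.  The APP probability of outcome 0 is
    # counts[0] / (counts[0] + counts[1]).

    def app_prob(counts):
        return counts[0] / float(sum(counts))

    # Refinement A: every fine branch splits into 3 sub-branches, regardless
    # of outcome.  The APP probability is stable under refinement.
    counts = [2, 3]
    for step in range(4):
        print("uniform refinement:", app_prob(counts))
        counts = [3 * c for c in counts]

    # Refinement B: outcome-0 branches split into 3, outcome-1 branches into 2.
    # The APP probability drifts toward 1, so there is no stable limit.
    counts = [2, 3]
    for step in range(4):
        print("biased refinement:", app_prob(counts))
        counts = [3 * counts[0], 2 * counts[1]]
    [/code]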

    Alternatively, we may wish to discuss whether Wallace is correct to assert that the number of worlds is formally infinite. I think your paper, Robin, assumes that world number remains finite. What is your argument for this? Are you essentially trying to avoid the incoherency arguments posed by Wallace? Would your scheme perhaps be made to work even if we assume an infinite number of worlds?

  7. Nov 25, 2005 #6



    My fear is that your requirement of the stability of probabilities as you increase the number of worlds is nothing other than the non-contextuality requirement. Then you will hit Gleason's theorem: he showed that the only way of assigning probabilities in that case is the Hilbert norm.
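
    (For reference, the standard statement as I remember it: for a Hilbert space of dimension at least 3, any assignment of probabilities to projections that sums to 1 over every orthogonal resolution of the identity, which is the non-contextuality assumption, must be of the form

    [tex]\mu(P) = \mathrm{Tr}(\rho P)[/tex]

    for some density operator ρ, i.e. the Hilbert norm.)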

    But I have difficulties with arguments that rely on the fact that the number of degrees of freedom is infinite. After all, we don't know that there isn't a discrete underlying structure, say at 10^(-5000) times the Planck scale. That's still finite. Infinity in a theory is, to me, a mathematical approximation of "very large".
  8. Nov 25, 2005 #7
    The following is maybe not 100% on-topic here, but I wouldn't accept this claim myself.

    I guess the intuition behind assigning uniform probabilities is that, if we don't have any better criteria, we might as well treat each world symmetrically. But for me this intuition is undermined by the fact that the number of worlds increases by some huge factor each second. If we treat all the worlds one second from now as equally important to each other, then why not treat each world one second from now as equally important as this world? But that would mean the part of the universe that is not in the far future is negligible. There would be more "world-moments" where it was the year one billion but we mistakenly thought it was 2005, than there would be "world-moments" where it actually was 2005. It would mean utilitarians should be going around advocating more world-splitting where people were happy, and less world-splitting where people were unhappy.

    So to put it in different terms... That each world has an unimaginably large number of descendant-worlds after even one second, suggests to me that the relevant variable is not world-count, but that we should imagine worlds as having a different "thickness". Earlier worlds should be much "thicker" if we want to avoid a lot of paradoxes. And once we've allowed that, there's no reason why we couldn't see some worlds as "thicker" than others that result from the same splitting.

    Of course, just because a decision rule has weird consequences doesn't mean we can ignore it, and it may be useful to work out how we should behave in a universe that really gets many times as "big" (in some decision-theoretical sense) each second. ("One man's modus tollens is another's modus ponens"...) And someone who was sufficiently convinced that equal probabilities are right, but thought the consequences were impossible, could just see this as refuting the Everett interpretation itself. But for me thinking about these things destroys the intuition that all worlds matter equally, or should be seen as equally probable. And the Everett interpretation itself has a lot going for it.

    So I don't agree that the program Deutsch started tries to solve the problem by introducing a new or different decision theory, because I think the old decision theory doesn't say anything clear about this situation.
  9. Nov 25, 2005 #8
    What exactly is the non-contextuality requirement? You've mentioned this many times, but I've never really understood it. :rolleyes:

    I'm happy with the position that we just don't know right now. I sorta lean toward infinity, but I'm not wedded to that position.

    Keep in mind that it is quite possible for the "underlying structure" to be discrete in some ways, continuous in others, at the same time. If spacetime is multiply connected, as in some models of quantum foam, then one could imagine that the *topological* degrees of freedom (like the genus of a surface, etc) are discrete and finite -- and yet spacetime is still fundamentally continuous.

  10. Nov 25, 2005 #9
    Hey Onto,

    So if a world of thickness T splits into two worlds with thicknesses T1 and T2, then we require: T = T1 + T2.

    I think that the term "world" should perhaps be given a meaning different from the term "branch," along the following lines. Suppose we have an experiment with two possible outcomes. Everett says that we have one world that splits into two. Personally, I have always imagined that, in fact, we had "two worlds all along," so it would be more appropriate to say that we had one branch that split into two. The thickness of each branch would then be defined roughly as the number of worlds that are represented by that branch.
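
    A minimal sketch of the bookkeeping I have in mind (my own toy code, just to make the proposal concrete): when a branch splits, its daughters' thicknesses sum to the parent's, so total thickness is conserved even while the raw branch count explodes.

    [code]
    # Toy bookkeeping for branch "thickness": when a branch of thickness t
    # splits, the daughters' thicknesses sum to t, so total thickness is
    # conserved even though the branch count grows without bound.

    def split(branches, fractions):
        """Split every branch according to the given thickness fractions."""
        return [t * f for t in branches for f in fractions]

    branches = [1.0]                          # one parent branch of thickness 1
    for generation in range(3):
        branches = split(branches, [0.75, 0.25])
        print(len(branches), sum(branches))   # count grows (2, 4, 8); total stays 1.0
    [/code]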

  11. Nov 25, 2005 #10
    I posed the topic of this thread as whether we agree with the claim that world counts are incoherent. So far none of the three people who have commented in this thread have agreed with this claim. Before we go too far in the direction of changing the topic - does anyone out there agree with this main claim?
  12. Nov 25, 2005 #11
    OK, apparently there is a lot more interest here in the topic of how to assign probabilities, assuming that world counts are coherent. So let me make a few comments.

    1. While utilities are about what we want, probabilities are about what we know about the way the world is. If you lump them both together into "decision theory," you risk confusing the choice we have regarding what we care about, with a choice about how to assign probabilities. If the physics is clear, then I think there should be a right answer about how to assign probabilities. It may not be obvious at first what that right answer is, but it is out there nonetheless. Even if there is some freedom about how to assign a prior, we should get enough evidence in this sort of context to get a strong posterior.

    2. The kind of uncertainty we are talking about here is "indexical." This is not uncertainty about the state of the universe, but rather uncertainty about which part of the universe we are. There is a philosophy literature on how to assign probabilities regarding indexical uncertainty. See Nick Bostrom's book "Anthropic Bias: Observation Selection Effects in Science and Philosophy".

    3. The issue is how to assign a prior, not a posterior. If we have reliable evidence that the year is 2005, then that will beat a strong prior that the year is one billion. If you don't think we have strong evidence, you get into simulation argument territory. It is not clear to me that a prior favoring future moments is irrational.
  13. Nov 25, 2005 #12
    Perhaps the topic could be: assuming that world counting is coherent, then what is the best way to make this into a workable theory? AFAIK, two of the three people in the world who have published bona fide attempts to do this are here in PF (I mean Robin and Weissman, with Graham being the one not here), and I am working on a fourth. Perhaps we could compare and contrast these various different strategies?

  14. Nov 25, 2005 #13
    All of the following is still about whether it's rational to base probabilities on world-counting, not about whether world-counting is incoherent. Maybe it belongs in the other thread, but since it's still a response to statements made here, I'm posting it here.

    Maybe it's a good idea to use Wallace's terminology here. Before a quantum experiment, an observer can take two different points of view:

    1. The pre-experiment observer is certain that he will become all post-experiment observers. Given the determinism of the theory there is nothing to be uncertain about; the observer only has to decide what utility weight to assign to each of his future selves. Wallace calls this the "Objective Determinism", or "OD" viewpoint.

    2. The pre-experiment observer is uncertain which of the post-experiment observers he will become. He quantifies this uncertainty by assigning probabilities. Wallace calls this the "Subjective Uncertainty", or "SU" viewpoint.

    I used to think that "OD" was obviously the right way to think about this. But Wallace uses an argument for the validity of "SU" taken from an earlier paper by Saunders. As I understand it, it goes as follows: The "person-stage" that is the pre-experiment observer is a part of many different "four-dimensional" persons. There is one person who observes first A and then B2, and another person who observes first A and then B1 (where A is everything the pre-experiment observer has experienced, and B1 and B2 are possible outcomes of the experiment). Before the experiment, each of these persons is indexically uncertain whether he is the person that will observe B1 or the person that will observe B2.

    If I'm reading your post correctly, you're saying that the pre-experiment observer's problem is one of choosing the right probabilities (to quantify his indexical uncertainty), rather than one of choosing a utility function with potentially different weightings for each possible future. So this would mean you agree with Wallace that "SU" is a valid perspective to take, and you disagree with Greaves that "OD" is the only valid perspective.

    But according to Wallace, the "SU" perspective allows you to make a strong argument for the validity of the Born rule that you can't make based on the "OD" perspective. (see page 19 of this paper). He claims that it's enough to prove "branching indifference" ("An agent is rationally compelled to be indifferent about processes whose only consequence is to cause the world to branch, with no rewards or punishments being given to any of his descendants."). He claims that from the "SU" viewpoint branching indifference is easily seen to be true; see the middle of page 19 for the argument. (As I understand it: if you add extra uncertainty about some branching process whose outcomes all have the same utility, this never changes the expected utility of any quantum process of which that branching process is a part.)
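
    If I've followed the structure of the argument, a small worked example (with made-up numbers of my own) shows why branching indifference separates the Born weights from outcome counting: splitting one outcome into extra sub-branches of equal utility leaves a weight-based expected utility unchanged, but shifts a count-based one.

    [code]
    # Worked example: a "coinflip" with Born weights 0.5 / 0.5 and utilities
    # u_heads = 10, u_tails = 0.  Now let the heads branch undergo an extra
    # branching into k sub-branches, all with utility 10 and with total Born
    # weight still 0.5.

    def born_eu(branches):
        return sum(w * u for w, u in branches)              # weight-weighted utility

    def count_eu(branches):
        return sum(u for _, u in branches) / len(branches)  # equal weight per branch

    before = [(0.5, 10.0), (0.5, 0.0)]
    k = 100
    after = [(0.5 / k, 10.0)] * k + [(0.5, 0.0)]

    print(born_eu(before), born_eu(after))    # 5.0, 5.0  -> branching indifference holds
    print(count_eu(before), count_eu(after))  # 5.0, ~9.9 -> outcome counting violates it
    [/code]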

    Do you think this argument works? If so, it should prove that the Born rule gives you the only rational prior. (In particular, it should prove that uniform probabilities on worlds are irrational.)
    Last edited: Nov 25, 2005
  15. Nov 25, 2005 #14
    No, I think Wallace's argument confuses utilities and probabilities. Even if some branching doesn't influence certain decisions, your probabilities might still be influenced. It is just a mistake to bring up "indifference" when thinking about probabilities.

    Indexical uncertainty is certainly possible, so I don't see how one can claim the perspective of such uncertainty is invalid.

    By the way I should admit to being at least a bit bothered by your pointing out that there are far more future "world-selves" than past world-selves, and so a uniform prior over such selves greatly favors the future. Not sure how strong an issue it is though.
  16. Nov 25, 2005 #15
    This is a good paper for us to use in this thread. Having read page 19, I do not see how or why SU compels us to accept branching indifference. In particular I agree with Robin:

    But I'm going to sit down and read through the entire paper and see if there is something I've missed ...
  17. Nov 25, 2005 #16
    OK, I'm going through Wallace's arguments in favor of branch indifference, which is iiuc equivalent to measurement neutrality. The APP, ie "outcome counting," would, btw, represent a violation of branch indifference. On page 20, Wallace poses the following objection to any violation of branch indifference -- ie, he poses this objection to the APP:

    This objection seems unreasonable. My main objection to Wallace's above argument is that there is nothing in the APP to exclude the possibility that some sort of pattern to the branching structure might exist, such that the coarse-grained probabilities (ie, the Born rule) can be approximated without knowing the precise fine-grained structure. Indeed, this is what makes the APP interesting to me: we can theorize relationships between the fine-grained structure and the coarse-grained probabilities. Since we know the coarse-grained probabilities are the Born rule, we are encouraged to search for a fine-grained structure, based on the APP, that is equivalent to a coarse-grained Born rule.
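
    As a toy version of what such a fine-grained structure could look like (entirely my own sketch, not something taken from Wallace or from anyone's published proposal): if each coarse outcome is represented by a number of fine-grained branches proportional to its Born weight, then bare outcome counting at the fine-grained level reproduces the Born rule at the coarse-grained level.

    [code]
    # Toy fine-graining: give each coarse outcome a number of fine branches
    # proportional to its Born weight |a_i|^2, then apply the APP (equal
    # probability per fine branch) and read off the coarse probabilities.

    amplitudes = [0.8, 0.6]                      # |0.8|^2 + |0.6|^2 = 1
    born = [a * a for a in amplitudes]           # Born weights: ~[0.64, 0.36]

    N = 10**6                                    # total number of fine branches
    fine_counts = [round(N * w) for w in born]   # branches per coarse outcome

    app_probs = [c / float(sum(fine_counts)) for c in fine_counts]
    print(born)       # ~[0.64, 0.36]
    print(app_probs)  # ~[0.64, 0.36] -- counting the fine branches recovers Born
    [/code]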

    A secondary objection to Wallace's argument is that the "plausible capabilities of any agent" are immaterial. When Mother Nature wrote Her most basic laws, I do not think she bothered to ask whether they were too complicated for us to understand!

  18. Nov 26, 2005 #17
    It looks like you're right: Wallace justifies "branching indifference" in terms of utilities, and he then uses it to prove "equivalence", which is phrased in terms of probabilities.

    But maybe this doesn't matter for the argument; maybe it still allows him to resolve the two lacunae he mentions at the start of section 9.

    I think Wallace's position is that probabilities should make sense both from the "SU" viewpoint where you're uncertain who you will become, and the "OD" viewpoint where you just look at the deterministic physics and determine how much you care about each branch. In his coinflip example on p19, if the extra branching event doesn't change the expected utility of the coinflip, then how is it possible that it changes the probability of either outcome?

    In the situation where the experiment has already taken place but you don't know the outcome, I agree that indexical uncertainty is obviously the right way to look at it. In that case there are multiple physical observers, and each of them doesn't know which one he is.

    But before the experiment has taken place, it's less obvious to me. In that case there is only one physical person, who knows for certain that he will split into multiple physical persons when the experiment takes place. To get something like indexical uncertainty there, you have to believe that there are in a sense already multiple persons before the experiment starts, each of which will observe a different outcome. (In other words the one physical person who exists before the experiment is a stage of many different person-histories, and each of these person-histories is uncertain (pre-experiment) which person-history it is.)
  19. Nov 26, 2005 #18
    If the number of worlds is multiplied by some number like 10^(10^30) between now and then, the prior will be some number like 10^(10^30) to 1. It's hard to see how even reliable evidence could beat that.
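
    Just to put numbers on it (the likelihood ratio below is a made-up, deliberately generous figure): in log-odds form, Bayes' rule says posterior odds = prior odds x likelihood ratio, and no ordinary evidence supplies a likelihood ratio anywhere near 10^(10^30).

    [code]
    # Bayes' rule in log10-odds form: posterior odds = prior odds * likelihood ratio.
    # With prior odds of 10^(10^30) : 1 against "it really is 2005", even evidence
    # with an absurdly generous likelihood ratio of 10^100 in favour barely registers.

    prior_log10_odds = 1e30      # log10 of prior odds of 10^(10^30) : 1 against
    evidence_log10_lr = 100.0    # log10 of a likelihood ratio of 10^100 in favour

    posterior_log10_odds = prior_log10_odds - evidence_log10_lr
    print(posterior_log10_odds)  # ~1e30: still overwhelmingly against
    [/code]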
  20. Nov 26, 2005 #19
    Wallace is assuming subjective probability here: probabilities aren't features of nature, they're features of how we should rationally deal with our uncertainty. If we were 99% sure we should assign equal probabilities and 1% sure we should assign Born probabilities, and assuming it really is completely beyond our capacities to figure out what we want to do in the case where world-counting matters (i.e. we would have no way to estimate the expected utility of one action as higher than that of some other action), then I think the 99% would just drop out of our calculations, and in practice we would be making decisions based on the 1% chance that we should assign Born probabilities.

    Having said that, I'm also not sure whether decision-making is always impossible if world counts matter.
  21. Nov 26, 2005 #20
    But how do you know that the extra branching doesn't change the expected utility of the coinflip? That's an assumption, but we could equally assume the APP, which tells us that the extra branching event does necessarily change the probability (and hence the expected utility) of the coinflip.

    I keep returning to the same conclusion that Patrick made in his paper: from a logical standpoint, we are free to postulate either the APP, or the Born rule, or some other rule. In my mind, the APP is more "beautiful" than the Born rule because of its symmetry; this, however, does not in and of itself tell us that the APP is "right" and the Born rule is "wrong." It's simply: more beautiful (to me, at least).

    It's the same argument that one might make for the principle of relativity. It just so happened that if we make the principle of relativity a requirement, then we end up with all sorts of new and wonderful things (ie, GR). Who would have guessed? So I'm speculating that if we assert the APP, and figure out how to make it compatible with known physics, then who knows where else it might take us?
