EmilyCA
Hi - I'm the author of the paper http://arxiv.org/abs/1504.01063 which was being discussed earlier in this thread. I guess the discussion has moved on a little, but I'm particularly interested in the discussion you're having now about classical vs many worlds probability.
The main point I tried to make in that paper is simply that if you believe all possible results of a measurement always occur, then you learn nothing new about the world when you observe a measurement result. Therefore in such a world no amount of measurement data could possibly serve to confirm any scientific theory, including quantum mechanics itself; so the Everett approach is self-undermining: if it is true, we have no empirical grounds for believing that quantum mechanics is correct.
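One way to make the structure of this argument precise (a Bayesian sketch of my own, not a quotation from the paper) is via the likelihood ratio. For any two theories $H_1$ and $H_2$ and any body of evidence $E$,

\[
\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)} .
\]

If both theories say that every outcome occurs on every trial, then $P(E \mid H_1) = P(E \mid H_2) = 1$ for any record of outcomes $E$, the likelihood ratio is 1, and no amount of measurement data can shift our credences between them.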
I agree that the status of probability is problematic even in non-branching theories. As someone said a few pages back, "if you are unlucky enough to get a run of 20 (or 20,000) heads in a row while tossing coins, then you will come to a false conclusion about whether you have a fair coin." Nonetheless, in a probabilistic setting it is still possible to claim that probabilities, whatever they are, are somehow responsible for the unique sequence of outcomes we have seen, and therefore we can reasonably expect to learn something about probabilistic laws from observing measurement results (while accepting that we might be making mistakes if we have been unlucky). In the Everett approach it is never reasonable to expect this, because there is no unique sequence of outcomes from which we could learn anything.
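To make the non-branching case concrete, here is a minimal sketch of learning from a unique outcome sequence (my own illustration, not from the paper; the coin biases 0.5 and 0.9 and the equal priors are arbitrary choices):

```python
import numpy as np

# One unique sequence of 20 coin tosses (True = heads), generated by a fair coin.
rng = np.random.default_rng(0)
flips = rng.random(20) < 0.5

# Two rival probabilistic hypotheses about the coin, with equal prior credence.
hypotheses = {"fair (p=0.5)": 0.5, "biased (p=0.9)": 0.9}
log_post = {h: np.log(0.5) for h in hypotheses}

# Bayesian updating: each observed toss multiplies credence by its likelihood.
for heads in flips:
    for h, p in hypotheses.items():
        log_post[h] += np.log(p if heads else 1.0 - p)

# Normalise and report the posterior over the two hypotheses.
z = np.logaddexp(*log_post.values())
for h in hypotheses:
    print(h, float(np.exp(log_post[h] - z)))
```

The data usually (not always!) end up favouring the true hypothesis; an unlucky run of heads would mislead us, exactly as in the quote above. The crucial point is that the update turns on the likelihood of the one sequence that actually occurred, and in the Everett picture there is no such unique sequence to condition on.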
The Deutsch-Wallace account of "probability" does nothing to change this fact - it shows that there might still be something that looks like probability in the many-worlds context, but fails to show that this something is connected to truth or typicality in the right sort of way for us to actually learn about the world from it.
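For readers who haven't seen it: the Deutsch-Wallace result, as I understand it, is a representation theorem saying that an agent whose preferences over quantum bets satisfy certain rationality axioms must weight the branches arising from a measurement on a state $|\psi\rangle$ by the Born weights,

\[
W(i) = |\langle i \mid \psi \rangle|^{2} .
\]

That gives the branch weights the functional role of probabilities in decision-making, but it is a further step - the one I am denying can be taken - to show that these betting weights tell us anything about which theory the observed data support.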