Does the MWI require "creation" of multiple worlds?

  • #151
A. Neumaier said:
since a world is not ''the universe'', and ''outcome'' is never defined, it doesn't really say anything about outcomes. The latter should be what comes actually out, which is a single result.

No, it's a single result for a single state of the apparatus. So, in the case described in my OP to this thread, ##|1> |U>## is one result (corresponding to the "up" state of the measured system) and ##|2> |D>## is another result (corresponding to the "down" state of the measured system). Both results occur because both terms appear in the final state of the total system (measured system + apparatus). "Outcome" is just another term for "result" as I have just defined it.
 
  • #152
PeterDonis said:
No, it's a single result for a single state of the apparatus. So, in the case described in my OP to this thread, ##|1> |U>## is one result (corresponding to the "up" state of the measured system) and ##|2> |D>## is another result (corresponding to the "down" state of the measured system). Both results occur because both terms appear in the final state of the total system (measured system + apparatus). "Outcome" is just another term for "result" as I have just defined it.
Ah, so ''occurs'' means ''occurs in the decomposition'', not ''occurs when actually measuring the system''. So occurrence says nothing about measurement!?

The only work about MWI (among what I read) that I found precise enough about what is claimed about actual measurement was Everett's thesis (who didn't use the misleading term world) - and it contained circular reasoning. Everything else was either too vague, or collected mathematical trivialities and play with words that pretended to have to do something with the real thing, without having substantiated it. I have enough of MWI for the next 10 years, and will not continue the discussion here.
 
  • #153
A. Neumaier said:
so ''occurs'' means ''occurs in the decomposition'', not ''occurs when actually measuring the system''.

The interaction that entangles the measured system and the apparatus is the measurement.
 
  • #154
A. Neumaier said:
I have enough of MWI for the next 10 years

I sympathize. :wink:

As I have said previously, the intent of this thread is not to argue whether MWI is right or wrong, but simply to get as clear an understanding as possible of what it says. Pointing out issues with it is really a separate discussion (and your articles are good contributions to any such discussion).
 
  • #155
PeterDonis said:
As I have said previously, the intent of this thread is not to argue whether MWI is right or wrong, but simply to get as clear an understanding as possible of what it says.
I think you've done that, and it raises the question:
Can we say that the various forms of MWI are interpretations of the standard formalism ?

I don't think so, for several reasons:
1. The non-unitary 'operator' postulate has been replaced.
2. The 'splitting' (I use quotes because I still don't know exactly what it means) is new physics and is not a natural extension.
3. There is no proof that the predictions (should anyone work out what they are) agree with standard QM.

I don't see how replacing ##\cos(\Delta\theta)^2## with ##1/2## can fail to make different predictions.
The thing is that the expression above tells us that we cannot predict with certainty whether light passes a polarizer or gets absorbed, unless ##\Delta\theta## is a multiple of ##\pi/2##. At all other angles there is unavoidable quantum indeterminacy. It seems that MWI wants to get rid of quantum indeterminacy.
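The indeterminacy at intermediate angles is easy to exhibit numerically. A quick sketch (the angles below are arbitrary illustrative choices):

```python
import math

# Branch probabilities for a polarizing beam splitter (Malus's law):
# P(pass) = cos^2(dtheta), P(other port) = sin^2(dtheta).
for dtheta in [0.0, math.pi / 6, math.pi / 4, math.pi / 2]:
    p_pass = math.cos(dtheta) ** 2
    p_other = math.sin(dtheta) ** 2
    # The outcome is certain only when P(pass) is exactly 0 or 1,
    # i.e. when dtheta is a multiple of pi/2.
    certain = math.isclose(p_pass, round(p_pass), abs_tol=1e-12)
    print(f"dtheta={dtheta:.4f}  P(pass)={p_pass:.4f}  certain={certain}")
```

At every ##\Delta\theta## that is not a multiple of ##\pi/2##, both probabilities are strictly between 0 and 1, which is the unavoidable indeterminacy referred to above.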

This is a new theory in my opinion.
 
  • #156
Mentz114 said:
The non-unitary 'operator' postulate has been replaced

Are you referring to the "collapse" postulate? In other words, the postulate that says that, after a measurement is made and the result is observed, you use the "collapsed" wave function corresponding to the observed result to make future predictions?

If so, the MWI is compatible with this, because each of the terms in the superposition I wrote down in the OP will evolve independently (at least, it will if we assume that the final state is the one after decoherence). So an observer in either branch can use just the term for his branch and correctly predict all future measurement probabilities that he will observe.

Mentz114 said:
The 'splitting' ( I use quotes because I still don't know exactly what it means) is new physics

No, it isn't. That was the main point of this thread. It's just unitary evolution, and unitary evolution is part of standard QM. There's nothing added on.

Mentz114 said:
There is no proof that the predictions (should anyone work out what they are) agree with standard QM

I'm not sure why such a proof would be needed, since the MWI uses the same math of standard QM as all other interpretations.

Mentz114 said:
I don't see how replacing ##\cos(\Delta\theta)^2## with ##1/2## can fail to make different predictions

You've lost me. What experiment are you referring to? It doesn't seem to be the one I described in the OP to this thread, which is the one I would like discussion to be focused on.
 
  • #157
Mentz114 said:
It seems that MWI wants to get rid of quantum indeterminacy.

Quantum indeterminacy only arises if you assume that only one result actually occurs. The MWI says that all possible results occur. So yes, there is no quantum indeterminacy in the MWI.

The problem the MWI has in this regard is explaining how the Born rule comes about and why it works.
 
  • #158
PeterDonis said:
Are you referring to the "collapse" postulate? In other words, the postulate that says that, after a measurement is made and the result is observed, you use the "collapsed" wave function corresponding to the observed result to make future predictions?

If so, the MWI is compatible with this, because each of the terms in the superposition I wrote down in the OP will evolve independently (at least, it will if we assume that the final state is the one after decoherence). So an observer in either branch can use just the term for his branch and correctly predict all future measurement probabilities that he will observe.
OK, I consider myself rebutted.
PeterDonis said:
You've lost me. What experiment are you referring to? It doesn't seem to be the one I described in the OP to this thread, which is the one I would like discussion to be focused on.
Polarizing beam splitter. One input, two outputs with probabilities ##\cos(\Delta\theta)^2## and ##\sin(\Delta\theta)^2##.
I think your post uses two equal probability outcomes which masks the problem to a large extent.
I understand what you wrote up to splitting, when suddenly there are two outcomes instead of one.
 
  • #159
PeterDonis said:
Quantum indeterminacy only arises if you assume that only one result actually occurs. The MWI says that all possible results occur. So yes, there is no quantum indeterminacy in the MWI.
The equations that predict so well all experimental results assume indeterminacy, and it is considered unimportant enough to drop?
 
  • #160
Mentz114 said:
The equations that predict so well all experimental results assume indeterminacy, and it is considered unimportant enough to drop?
Far too important not to drop, if, in fact, it is possible to do so. Which is kind of the point of Everett's work - to eliminate any reliance on physical indeterminacy. The projection postulate becomes a theorem (some would deny this, of course) based only on unitarity. Physical indeterminacy is superfluous; observed indeterminacy is derived in the physical argument that underpins MWI. Which I am not going to regurgitate here!
Of course, indeterminacy of the properties themselves doesn't come into this. They don't even exist in the wave function.
 
  • #161
Mentz114 said:
Polarizing beam splitter

Ah, ok. Yes, with arbitrary coefficients that's the same experiment, schematically, as the one I describe in the OP. See below.

Mentz114 said:
I think your post uses two equal probability outcomes

It does, but that's not essential to the question I was asking. You can substitute arbitrary complex numbers ##a## and ##b## such that ##|a|^2 + |b|^2 = 1## for the two coefficients if you want.
The point is that, in the MWI, both outcomes happen: both branches of the superposition occur. That's true whether the coefficients are equal or not.

The issue is how you assign any physical meaning to the coefficients in the MWI; that's another way of putting the question about the Born rule that I mentioned in my last post. Or, to put it another way, how do you figure out what the coefficients are in the state that is being prepared by some apparatus that you just came across? The Born rule is an essential part of doing that, so if it can't be justified under the MWI, then that's an issue with the MWI. MWI proponents seem to recognize this and have devoted quite a bit of effort in attempts to derive the Born rule within the MWI. I don't think any of those attempts are judged to be completely successful.
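Operationally, the coefficients are pinned down exactly as described: prepare many copies, record frequencies, and compare with ##|a|^2## and ##|b|^2##. A minimal sketch (the values of ##a## and ##b## below are arbitrary illustrative choices, not from any specific experiment):

```python
import cmath
import random

# Arbitrary complex coefficients with |a|^2 + |b|^2 = 1.
a = 0.6 * cmath.exp(0.3j)   # |a|^2 = 0.36
b = 0.8j                    # |b|^2 = 0.64

random.seed(0)
n = 100_000
# The Born rule as a sampling prescription: outcome "up" with probability |a|^2.
ups = sum(random.random() < abs(a) ** 2 for _ in range(n))
print(f"|a|^2 = {abs(a) ** 2:.3f}, observed frequency of 'up' = {ups / n:.3f}")
```

It is exactly this sampling step, the Born rule, that a derivation within the MWI would have to justify from unitary evolution alone.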

Mentz114 said:
I understand what you wrote up to splitting, when suddenly there are two outcomes instead of one.

That's because of the unitary evolution of the state. The transformation that takes the input state in my OP to the output state in my OP is unitary. So evidently, with the definition of "outcomes" that I gave, a unitary operation can transform one outcome into two outcomes.

But using the word "splitting" to describe that process makes it sound as though this "outcome multiplication" somehow copies something, or manufactures a new "outcome" out of nowhere; the point of my OP to this thread is that that intuitive reasoning from the word "splitting" is not valid. A unitary operation does not create or destroy anything. It just transforms one pure state vector into another pure state vector. The "splitting" is in the description we choose to apply to the process, not in the process itself. So is the word "outcome". Nature doesn't care whether we label the terms in the superposition as "outcomes" or not.
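The point about unitarity can be made concrete with a toy von Neumann model (a sketch of the schematic setup, not the OP verbatim: reducing the apparatus to a two-state pointer and the entangling interaction to a CNOT-style unitary are simplifying assumptions):

```python
import math

# Basis order: |1,U>, |1,D>, |2,U>, |2,D>.  The pointer starts in |U>
# (doubling as the "ready" state in this minimal two-state model) and is
# flipped to |D> exactly when the measured system is in |2>.
CNOT = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]

a, b = 0.6, 0.8               # arbitrary real coefficients, a^2 + b^2 = 1
psi_in = [a, 0.0, b, 0.0]     # (a|1> + b|2>) x |U>: a single product term

# Apply the entangling unitary: matrix-vector multiplication.
psi_out = [sum(CNOT[i][j] * psi_in[j] for j in range(4)) for i in range(4)]
print(psi_out)                # a|1,U> + b|2,D>: two entangled branches


def norm(v):
    return math.sqrt(sum(x * x for x in v))


print(norm(psi_in), norm(psi_out))  # both 1.0: nothing created or destroyed
```

One state vector goes in, one comes out; the "split" is only in how we describe the output, as a sum of two pointer-correlated terms.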
 
  • #162
Mentz114 said:
The equations that predict so well all experimental results assume indeterminacy

No, they don't. They assume that the coefficients of the terms in a superposition predict the probabilities for the corresponding measurement results to be observed. And they tell you that after the measurement, you use the reduced state vector, the single term that describes the result you actually observed, to predict the probabilities of future measurement results. None of that requires "indeterminacy" to be anything fundamental.

Or perhaps the issue is that you are using "indeterminacy" to simply mean "the equations can't predict the actual result you will observe; they can only predict probabilities". In that case, I was wrong to say the MWI doesn't have "indeterminacy"; in that sense, it does. But on this interpretation of "indeterminacy", "indeterminacy" is perfectly compatible with all of the possible measurement results actually occurring.
 
  • #163
Stephen Tashi said:
The common interpretation of a probabilistic situation is that there are several "possible" outcomes and only one of them "actually" occurs. In the MWI (and other interpretations of QM that do not allow collapse of wave functions) is such an interpretation possible?

If so, what kind of events are the "possible" outcomes - of which only one "actually" occurs?

If not, then what is the physical interpretation of probability?
Please refer to the "Absolute chance", "A splitting universe", and "Probability interpretation" sections of the paper "Quantum mechanics and reality" by Bryce DeWitt. I won't post the mathematics of his work (which can be found in the paper), but here is the quote I'm sure you're looking for:

"The state vector at the end of the coupling interval again takes the form of a linear superposition of vectors, each of which represents the system observable as having assumed one of its possible values. Although the value varies from one element of the superposition to another, not only do both apparatuses within a given element observe the value appropriate to that element, but also, by straightforward communication, they agree that the results of their observations are identical. The splitting into branches is thus unobserved."

If you have access to the book/journal "Battelle Rencontres, 1967 Lectures in Mathematics and Physics", look for his article "The Everett-Wheeler Interpretation of Quantum Mechanics".
 
  • #164
kith said:
The current state of affairs seems to be the 2009 paper "A formal proof of the Born rule from decision-theoretic assumptions" by David Wallace.

Some thoughts about that approach:

1. It's often been said on this forum that the act of observation in QM does not require a conscious observer. That paper interprets probability in terms of actions by a "rational" agent. I suppose a rational agent need not be conscious, but the rational agent needs to have preferences that define a utility function and the rational agent needs to make some decisions. So it seems the existence of probability depends on the existence of "higher mental functions".

2. The motivation for the rational agent in the informal discussion of the paper portrays the agent visualizing that he will exist on one of the branches of a tree of several possible outcomes, but not knowing which one. So, if the experience of one rational agent is defined to be a "world", the agent is visualizing a situation where there are several possible "worlds" for him and he will "actually" enter only one of those worlds.

3. Do probabilities exist only in situations where an actual rational agent is making a decision? Or do they exist in any situation where we may imagine a rational agent making a decision? By analogy, a force field at (x,y,z) might be defined by an equation that gives the force on a hypothetical unit test mass placed at (x,y,z), but we consider the field to exist even if no actual unit test mass is ever placed there. A weakness in that analogy is that a "unit test mass" is a specific physical quantity, but a "rational agent" is not. For example, the decision algorithm that implements the behavior of a rational agent might be executed on different types of computer hardware or by biological systems.

4. Does defining probability in terms of the decisions of a rational agent imply that only the Bayesian view of probability is correct? Doesn't the dependence of probability on an agent's utility function imply there is no objective probability in physics?
 
  • #165
romsofia said:
Please refer to the "Absolute chance", "A splitting universe", and "Probability interpretation" sections of the paper "Quantum mechanics and reality" by Bryce DeWitt.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.729.5437&rep=rep1&type=pdf

There might be some hope of interpreting the mathematics if I can overcome the fundamental language difficulty in the MWI viewpoint - namely: How can we refer to anything that persists in time in the singular? For example, on p162 we read:
If the apparatus observes each system exactly once, in sequence, then ...

But if "the" apparatus is making a sequence of observations that can have different outcomes, it is not the "same" apparatus after an observation. Keeping track of the branching of the original apparatus is done by keeping a "memory sequence". If we find a record of measurements beginning with the "same" apparatus A0 and having the same memory sequence, do we conclude the two records are for the same apparatus A1, which is on a branch descending from A0? Or can there be some "incidental" outcome of the situation, not recorded in the memory sequence, that results in two distinct apparatuses A1 and A2 that both descend from A0 and both have the same memory sequence?
 
  • #166
Stephen Tashi said:
1. It's often been said on this forum that the act of observation in QM does not require a conscious observer. That paper interprets probability in terms of actions by a "rational" agent. I suppose a rational agent need not be conscious, but the rational agent needs to have preferences that define a utility function and the rational agent needs to make some decisions. So it seems the existence of probability depends on the existence of "higher mental functions".
I agree. If you remove the observer from the MWI you run into the problem of this thread: there doesn't seem to be a sensible way to speak of multiple worlds (because their number depends on the choice of subspaces / bases), nor of the splitting of a world (because decoherence is continuous). I think that the only sensible notion of "world" in the MWI is the world of experience of an observer.

Stephen Tashi said:
2. The motivation for the rational agent in the informal discussion of the paper portrays the agent visualizing that he will exist on one of the branches of a tree of several possible outcomes, but not knowing which one. So, if the experience of one rational agent is defined to be a "world", the agent is visualizing a situation where there are several possible "worlds" for him and he will "actually" enter only one of those worlds.
The state vector of the universe also has terms where "the agent" has different experiences. If you simply discard these terms as physically meaningless, you are talking about Copenhagen, not the MWI. If you take them as physically real, then after the measurement there are two agents with mutually exclusive experiences (and no ability to interact).

For me, this outside view isn't sufficiently reconciled with the inside view of "the agent" before and after the measurement. But it's been a while since I had a look at Wallace's paper, and I would need to study it in more detail to pinpoint this.

Stephen Tashi said:
3. Do probabilities exist only in situations where an actual rational agent is making a decision? Or do they exist in any situation where we may imagine a rational agent making a decision?
I'm not sure if I understand the importance of the difference between these two viewpoints.

Stephen Tashi said:
By analogy, a force field at (x,y,z) might be defined by an equation that gives the force on a hypothetical unit test mass placed at (x,y,z), but we consider the field to exist even if no actual unit test mass is ever placed there.
At least part of considering the (classical electromagnetic) field to be real is convenience. If you try to remove the field in favor of interactions between particles, you get ugly equations because of retardation. Another important reason for considering the field to be real is the existence of waves in the absence of particles (although one might go on to argue that these are not real either, only their impact on particles is).

Stephen Tashi said:
4. Does defining probability in terms of the decisions of a rational agent imply that only the Bayesian view of probability is correct? Doesn't the dependence of probability on an agent's utility function imply there is no objective probability in physics?
Yes, I think that taking this viewpoint leads to this conclusion.
 
  • #168
A. Neumaier said:
when, "for all practical purposes", there is only one world and when there are several. Or are there from the beginning infinitely many worlds?
Both options are valid. There is another approach to understanding worlds in MWI which I don't think has been brought up yet in this thread and which some may find more appealing.

Instead of world splitting, you can consider the wavefunction to be a continuum of pre-existing worlds and a measure of their density. As they evolve, those worlds diverge instead of splitting. This view is like Bohmian mechanics with every point equally real: focusing on one world-particle, the other worlds together form a guiding wave.
 
  • #169
Stephen Tashi said:
4. Does defining probability in terms of the decisions of a rational agent imply that only the Bayesian view of probability is correct? Doesn't the dependence of probability on an agent's utility function imply there is no objective probability in physics?

A lot of the discussions about probability in QM ultimately are about the nature of probability, whether quantum or not. QM gives some additional twists to it, but the concept is pretty tricky classically, as well. My opinion about it is that the alternative to Bayesianism, frequentist probability, doesn't actually make complete sense. You can say that probabilities are relative frequencies in the limit as the number of trials goes to infinity. But there is nothing in the laws of physics that make it impossible for an infinite sequence of tosses of a fair coin to approach 1/3 or 9/10 or anything else (or to have no limit at all). The best you can say is that the probability of having a sufficiently large sequence of coin flips where the relative frequency differs appreciably from the probability for a single flip goes to zero as the number of flips goes to infinity. So the equation of relative frequency with probability only holds with probability 1. But the latter notion of probability isn't frequentist. To make sense of it in a frequentist approach, we would need infinitely many trials, each trial of which is an infinite sequence of coin flips.
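The gap between relative frequency and probability is easy to exhibit numerically. A small sketch (the seed and run lengths are arbitrary):

```python
import random

random.seed(1)
flips = [random.random() < 0.5 for _ in range(100_000)]

# Relative frequency of heads over ever-longer prefixes of one run.
for n in (10, 1_000, 100_000):
    print(f"n={n:>6}: relative frequency = {sum(flips[:n]) / n:.4f}")

# Nothing in the laws of physics forbids an all-heads run; it is merely improbable:
print(f"P(1000 heads in a row) = 2**-1000 = {2.0 ** -1000:.3e}")
```

The convergence seen in any one run is itself only a probability-1 statement, which is the circularity noted above.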

There's another disturbing fact about even classical probability. Imagine that the universe is infinite, and with a certain quasi-periodic completeness property: there are infinitely many planets that are exactly like the Earth (at least in macroscopic detail). In this setting, if I flip a coin, it's expected that there are copies of the Earth where the copy of me flips a coin and gets "heads", and there are copies where the result is "tails". If I flip a coin 100,000 times, there will be Earths where all 100,000 were heads, and Earths where all 100,000 were tails, and all other combinations in between. What we can probably say is that the density of Earths where between 45,000 and 55,000 were heads will be much higher than the density of Earths where the results will be outside of that range. But there will be Earths outside that range. No matter how unlikely a sequence of events is, as long as its probability is nonzero, there will be some place where that exact sequence happens. So the identification of probability with relative frequency will work for some Earths, but not others. If you say the life span of intelligent life on Earth is maybe bounded by 20 billion years, there will be Earths where the entire history of intelligent life will show departures of relative frequencies from probabilities.
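The density claim here can be made quantitative with the standard Hoeffding bound (a back-of-the-envelope addition, not part of the original argument):

```python
import math

# Fraction of "Earths" whose 100,000 fair flips land outside the
# 45,000-55,000 heads band: P(|heads - 50000| >= 5000) <= 2*exp(-2*d^2/n).
n, d = 100_000, 5_000
bound = 2 * math.exp(-2 * d ** 2 / n)
print(f"upper bound on the atypical fraction: {bound:.3e}")
```

So atypical Earths are fantastically rare by measure, yet, as the argument stresses, in an infinite ensemble they still exist.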

What you could say is that there is a "typical" Earth-like world, where frequencies approach probabilities, and just not worry about atypical worlds. But that amounts to assuming that we are not in an atypical world. What's the reason for assuming that? No matter what evidence we can have for our world being typical, there will be copies of our Earth with the same evidence that are atypical. So at some point, you just have to assume, without evidence, that our world is typical (otherwise, probabilities are useless). But that assumption is really subjective---it's not a firm conclusion based on evidence.

So in my opinion, subjective beliefs always come into play in working with probability, even if it is just the subjective belief that probabilities will be objectively equal to relative frequencies.
 
  • #170
stevendaryl said:
So in my opinion, subjective beliefs always come into play in working with probability, even if it is just the subjective belief that probabilities will be objectively equal to relative frequencies.

One of the most common assumptions (subjective beliefs) is exchangeability: https://en.wikipedia.org/wiki/Exchangeable_random_variables.
 
  • #171
stevendaryl said:
A lot of the discussions about probability in QM ultimately are about the nature of probability, whether quantum or not. QM gives some additional twists to it, but the concept is pretty tricky classically, as well. My opinion about it is that the alternative to Bayesianism, frequentist probability, doesn't actually make complete sense.

Yes, probability is a vexing concept, with or without QM. Both Bayesian and frequentist probability use the intuitive notion that there is a situation where several outcomes are "possible" and only one outcome "actually" happens. In the Bayesian approach, probability is assigned to events based on information or belief. In the frequentist approach, we imagine probability to be an objective property. However, both approaches fail to formalize the process of "possible" things becoming "actual". They both treat probability as a mathematical measure. Measure theory has no axioms about "actual" versus "possible" things. It doesn't even have an axiom that says it is possible to take random samples.

stevendaryl said:
So in my opinion, subjective beliefs always come into play in working with probability, even if it is just the subjective belief that probabilities will be objectively equal to relative frequencies.

Some physicists dissatisfied with the limitations of measure theory make that an explicit assumption. (I think this concept is called "physical probability"). For example, Lucien Hardy https://arxiv.org/abs/quant-ph/0101012 assumes:
Axiom 1 Probabilities.
Relative frequencies (measured by taking the proportion of times a particular outcome is observed) tend to the same value (which we call the probability) for any case where a given measurement is performed on an ensemble of n systems prepared by some given preparation, in the limit as n becomes infinite.

This resembles the Law of Large Numbers, except that it says relative frequencies approach the probability - guaranteed! - always, definitely, no mathematical dissembling about limits of probabilities of frequencies approaching 1.

I don't expect the MWI to settle the conceptual problem of probabilities. I'm just curious whether "probability" in the MWI is some radical departure from the notion of "many possible" and "only one actual". As far as I can see, it is not. Using a rational agent to define probabilities appears to postulate an agent who has the concept of "many possible" branches and "only one actual" branch that the agent's descendant will take. In a manner of speaking, the theory as a whole lacks probabilities, but we can introduce them by taking the viewpoint of one agent experiencing the consequences of the theory.
 
  • #172
PeterDonis said:
The interaction that entangles the measured system and the apparatus is the measurement.
Not really, as it takes time to get from the initial state to the final state. Thus one would have to specify a time when the measurement result is read, and to prove that the prescription given is robust, i.e., does not significantly depend on the precise reading time.
 
  • #173
stevendaryl said:
So in my opinion, subjective beliefs always come into play in working with probability, even if it is just the subjective belief that probabilities will be objectively equal to relative frequencies.
By the same token, subjective beliefs come into play in all of physics, even if it is just the subjective belief that the assumptions of our physical theories are valid.
 
  • #174
A. Neumaier said:
By the same token, subjective beliefs come into play in all of physics, even if it is just the subjective belief that the assumptions of our physical theories are valid.

That's fine, and certainly true. But it seems to me that in the case of the application of probability theory, there is an additional assumption to be made beyond assuming what the laws of physics are, which is that we are in a typical world. That's a different type of assumption.
 
  • #175
stevendaryl said:
which is that we are in a typical world. That's a different type of assumption.
... needed (and meaningful) only in MWI. But your arguments in #171 were independent of the interpretation, and partly even classical.
 
  • #176
A. Neumaier said:
... needed (and meaningful) only in MWI.

That's actually not true. We have to make similar assumptions in classical physics, but they are just not made explicit.
 
  • #177
stevendaryl said:
That's actually not true. We have to make similar assumptions in classical physics, but they are just not made explicit.
No. In classical physics, we approximate probabilities by relative frequencies, in the same way as we approximate exact positions by measured positions. By regarding a relative frequency as an approximate measurement of the exact probability, no assumption about alternative worlds needs to be made.
 
  • #178
A. Neumaier said:
No. In classical physics, we approximate probabilities by relative frequencies

How do you know that's a good approximation? You don't. In classical probabilities, a sequence of flips of a fair coin can give you a relative frequency of anything between 0 and 1. We assume that if we flip enough times, then the relative frequency will settle down to 1/2 in the case of a fair coin. But that is the assumption that our world is a "typical" one.
 
  • #179
stevendaryl said:
How do you know that's a good approximation?
How do you know it in the case of high-precision measurements of position? One generally assumes it without further ado, and corrects for mistakes later.

stevendaryl said:
In classical probabilities, a sequence of flips of a fair coin can give you a relative frequency of anything between 0 and 1.
In theory but not in practice. If one flips a coin 1000 times and always finds heads, everyone assumes that the coin, or the flips, or the records of them have been manipulated, and not that we were lucky or unlucky enough to observe a huge statistical fluke.

We draw conclusions about everything we experience based on observed relative frequencies on a sample of significant size, and quantify our remaining uncertainty by statistical safeguards (confidence intervals, etc.), well knowing that these sometimes fail. Errare humanum est.

stevendaryl said:
But that is the assumption that our world is a "typical" one.
Nobody before the advent of MWI ever explained the success of our statistical reasoning by assuming that our world is a typical one. Indeed, if there are other worlds, we cannot have an objective idea at all about what happens in them, only pure guesswork; all of them might have completely different laws from what we observe in ours. Hence any statements about the typicality of our world are heavily biased towards what we find typical in our only observable world.
 
  • #180
A. Neumaier said:
How do you know it in case of high precision measurements of position? One generally assumes it without further ado, and corrects for mistakes later.

I can't tell whether you actually have a disagreement, or not.

A. Neumaier said:
In theory but not in practice. If one flips a coin 1000 times and always finds heads, everyone assumes that the coin, or the flips, or the records of them have been manipulated, and not that we were lucky or unlucky enough to observe a huge statistical fluke.

That's the assumption that our world is "typical". So you're both making that assumption and denying it, it seems to me.
 
  • #181
stevendaryl said:
That's the assumption that our world is "typical". So you're both making that assumption and denying it, it seems to me.
No.

In common English, to call something typical means that one has seen many similar things of the same kind, and only a few were very different from the typical instance. So one can call a run of coin flips typical if its frequency of heads is around 50%, and atypical if it was a run where the frequency is outside the ##5\sigma## threshold required, e.g., for proofs of a new particle (see https://physics.stackexchange.com/questions/31126/ ), with a grey zone in between.

This is the sense I am using the term. All this happens within a single world. It is not the world that is typical but a particular event or sequence of events.

But I have no idea what it should mean for the single world we have access to to be ''typical''. To give it a meaning, one would have to compare it with speculative, imagined, other worlds unobservable to us. Thus calling a world typical is at best completely subjective and speculative, and at worst completely meaningless.
 
  • #182
A. Neumaier said:
No.

In common English, to call something typical means that one has seen many similar things of the same kind, and only a few were very different from the typical instance. So one can call a run of coin flips typical if its frequency of heads is around 50%, and atypical if it was a run where the frequency is outside the ##5\sigma## threshold required, e.g., for proofs of a new particle (see https://physics.stackexchange.com/questions/31126/ ), with a grey zone in between.

This is the sense I am using the term. All this happens within a single world. It is not the world that is typical but a particular event or sequence of events.

But I have no idea what it should mean for the single world we have access to to be ''typical''. To give it a meaning, one would have to compare it with speculative, imagined, other worlds unobservable to us. Thus calling a world typical is at best completely subjective and speculative, and at worst completely meaningless.
Just remember, hair-splitting is irrelevant to world-splitting.

Funnily enough I can understand Steven's language in what appears, admittedly to my vague sort of mind, to be perfectly well-defined terms. Personally I translate "typical" into something useful about confidence limits.
 
  • #183
stevendaryl said:
There is no collapse in Many Worlds.
Aren't the many worlds theoretical?
 
  • #184
Derek P said:
@stevendaryl was describing the smooth evolution of the emergent worlds. It was not even remotely a reformulation of MWI.

You may believe so but MWI asserts exactly the opposite.
See this article - but only if you don't mind Vongher's provocative style.
That article fails simply because its use of Wikipedia makes the research infotainment, plus a lot of thought experiments. Neumaier has it spot on.
 