Understanding MWI: A Newbie's Guide to Quantum Physics and the Multiverse

  • Thread starter: confusedashell
  • Tags: MWI
  • #151
Vanesh, in your view... do you ever attempt to explain where the complex amplitude formalism comes from, or is it just taken to be a fact, and you are trying to interpret it?

What I suggest is that there may ultimately be an explanation of exactly what a superposition means. And I suggest that it can be seen as a statistical mixture, not of probability distributions in one and the same space, but rather in a related space. The easiest example is the momentum space with a phase. These two spaces are related by a Fourier transform, but why a Fourier transform? Is there an answer to this? I think so.
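As a toy numerical illustration of this duality (a sketch only; the state, the grid and numpy's FFT conventions are choices made purely for illustration):

```python
import numpy as np

# A state built as a superposition of two plane waves in position space:
# psi(x) = exp(i*k1*x) + exp(i*k2*x), sampled on N points.
N = 256
x = np.arange(N)
psi_x = np.exp(2j * np.pi * 10 * x / N) + np.exp(2j * np.pi * 30 * x / N)

# In position space the density |psi(x)|^2 shows interference oscillations...
print(np.round(np.abs(psi_x[:4]) ** 2, 3))

# ...while the SAME state, Fourier-transformed to "momentum space", is just
# two sharp spikes: a simple two-component, mixture-like distribution.
prob_k = np.abs(np.fft.fft(psi_x) / N) ** 2
print(np.flatnonzero(prob_k > 1e-9))   # -> [10 30]
```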

What is a superposition in one space is just a simple mixture in another space. So what I tried to suggest above is that the formula

P(x|b \vee c) = \left[P(x|b)P(b) + P(x|c)P(c) - P(x|b \wedge c)P(b \wedge c) \right]\frac{1}{P(b \vee c)}

does NOT apply if the condition doesn't live in the same space as x does. So probability theory here does not really fail; it is IMO possibly just misapplied.
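For what it's worth, when everything does live in one and the same space the identity itself is easy to verify; a minimal sketch with toy events as Python sets over a uniform sample space (the events are invented for illustration):

```python
from fractions import Fraction

# Uniform probability over a small finite sample space.
omega = set(range(12))
P = lambda E: Fraction(len(E), len(omega))

x = {0, 1, 2, 3}         # the event we condition
b = {1, 2, 5, 6, 7}      # condition b
c = {2, 3, 7, 8}         # condition c (overlaps b, so the correction term matters)

def cond(E, C):          # P(E | C)
    return P(E & C) / P(C)

lhs = cond(x, b | c)
rhs = (cond(x, b) * P(b) + cond(x, c) * P(c)
       - cond(x, b & c) * P(b & c)) / P(b | c)
assert lhs == rhs        # holds exactly, for any events in one common space
print(lhs, rhs)          # 3/7 3/7
```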

What I'm suggesting is thus that the operation of taking a union of conditions defined in a related space needs to be transformed to make sense. I see this as analogous to the fact that you need to parallel-transport vectors from different tangent spaces before you can add them. Adding vectors from different tangent spaces coordinate by coordinate is nonsense.

I am working on this and have not completed the formalism yet, but maybe you see that there is at least a possibility to accomplish this? Whether I make it or not will be evident in time. But I am very optimistic.

/Fredrik
 
  • #152
So I see two issues.

1) The superposition vs mixture issue can *I think* be solved by transforming the events to the same event space before taking the union of the conditions. However, to understand the consistency of this approach (i.e. to motivate it), I think one needs to consider the uncertainty of the probability space in a larger context, which actually connects to the second point.

2) The uncertainty in establishing your probability space, and how this is constrained by memory capacity, with the implication that incoming information must be balanced by released information, unless the "mass" is increasing... which can happen, and it can also shrink... which is sort of the third issue... but there may be a way to wrap this up.

At a minimum, and at an early stage, this should (unless it's a total failure):

1) explain the mystery of the "complex wavefunction" and how it can be introduced from first principles as something unique

2) explain the relation between mass and information capacity and how it relates to inertia

3) Explain the arrow of time as a sort of simple diffusion over a network of related probability spaces.

I think this can unite standard formalism from statistical mechanics with the QM stuff, and probably get gravity for free.

Either it works, or it doesn't.

/Fredrik
 
  • #153
Fra said:
Vanesh, in your view... do you ever attempt to explain where the complex amplitude formalism comes from, or is it just taken to be a fact, and you are trying to interpret it?

Yes. I take it as given, and I try to make sense of it. That's why I call it "interpretation of quantum theory", and not "developing quantum theory further".

One might try to modify/improve/change/... quantum theory into something vaster, but my bet is that without any experimental hint whatsoever, this is a risky (and IMO futile) undertaking. For most practical purposes, quantum theory as it stands is "good enough", and so the only thing that is really needed is a STORY around it that makes its principles stand out more clearly (even though the story might sound crazy), and thus helps you understand the *machinery*.
By no means do I consider it a "final story", and by no means do I think that quantum theory is "the ultimate theory", but I have no alternatives, and it is good enough.

I think there is something mentally healthy about having a theory that is "good enough" but which you suspect is not "the final word" (without secretly having an idea of what that final word should look like, but being honestly agnostic about it), together with a totally crazy story about the workings of this theory that illustrates perfectly how the theory works (and which might, or might not, be true). It helps one to put things - especially physics - in perspective.

It keeps one from entering a kind of delirium of "knowing the ultimate truth" and from grandiose (and childish) ideas about knowing the "meaning of life, the universe and everything"... and it makes for amazingly good stories around the camp fire :smile: On top of that, it works very well in practice! And, like with all good ghost stories... you never know!
 
  • #154
I think I understand your perspective a little better now.

To speak for myself, if I didn't have the (no matter how naive! :) idea that I could improve the current theory, and a hint of how to do it, I would be perfectly satisfied with the "shut up and calculate" philosophy.

Without such an ambition of improvement or extension, the best "understanding" of the current models is, to me, simply to understand the historical development of the theory. The history of science and physics is interesting, and it's not that hard to imagine how the ideas have developed.

/Fredrik
 
  • #155
If we do not know what something is doing, then it is allowed to do everything possible simultaneously. In the case of the double slit experiment, we do not know whether the electron passed through the left slit or the right slit, so we assume that it passed through both slits simultaneously. Each possibility is a "state", and because the electron fulfills both possibilities it is said to be in a "superposition of states".


So MWI claims that the electron has two definite choices (either it passes through the left slit or the right slit), at which point the universe divides into two universes: in one universe the electron goes through the left slit, and in the other universe the electron goes through the right slit. These two universes somehow interfere with each other, causing the interference pattern. So whenever an object has the potential to enter one of several possible states, the universe splits into many universes, so that each potential is fulfilled in a different universe.
 
  • #156
ripcurl1016 said:
If we do not know what something is doing, then it is allowed to do everything possible simultaneously. In the case of the double slit experiment, we do not know whether the electron passed through the left slit or the right slit, so we assume that it passed through both slits simultaneously. Each possibility is a "state", and because the electron fulfills both possibilities it is said to be in a "superposition of states".

Well, I don't know if this is what you are saying, but it illustrates a point in the discussion with reilly.

There is a big difference between: "not knowing in what state a system is" (but assuming it is in one or another), and knowing (or not) that the system is in a *superposition* of states.

The first one (not knowing in which state...) is a statement about my knowledge, and enters into a probability description (with a Bayesian view). The second one is about an objective physical property.

In the first case, we can "treat all possible cases": we can "loop over" all possible states the system could be in, and reason under each hypothesis, to come to a final conclusion with uncertainties (because we have uncertainties about which statement was correct).

This was, in our example, the "uncertainty" about whether the particle went through the left or the right slit. But it means that we can say:
A) let's assume it was the left slit, blah blah...
B) let's assume it was the right slit, bluh bluh...

and so the end result is a 50% chance to have blah blah and a 50% chance to have bluh bluh.

But in the second case (objective physical state), the *superposition of the two states* is a physical state different from "left state" or "right state" with a lack of knowledge. It is as different from these two as any other state. So the superposition is then NOT a "lack of knowledge" on my part, but a physically different situation, which can best be pictured as the particle going through both slits at the same time (which is of course physically different from "the particle goes through slit 1" and from "the particle goes through slit 2").

The wavefunction (in collapse interpretations) dances between both. Indeed, at the moment of observation, the wavefunction (written in the "right" basis!) is clearly generating a "probability distribution", that is: it is saying something about a lack of knowledge on our part: namely WHICH outcome we're going to obtain. The wavefunction at the moment of detection doesn't seem to tell us that the particle is AT THE SAME TIME at x1, at x2, at x3, at x4 ... but rather that it will be measured at x1 OR at x2 OR at x3... and that we don't KNOW this.

But when it was at the slits, it wasn't telling us that the particle went through slit 1 OR through slit 2, but rather that it did something different (most reasonably, that it went through both slits at once).

In classical settings, a lack of knowledge is usually NOT assumed to correspond to a DIFFERENT state than the "possible ones". If Joe has thrown a die, but he didn't tell me the outcome, I assume that the result was 1, OR 2, OR 3, OR... OR 6, but not some state different from these 6 possible ones. I can deduce everything by assuming first that he threw 1, then assuming that he threw 2... and in the end, weigh all possible results with a probability of 1/6 (fair die).
For instance, if I were to assume that he had ALL RESULTS AT ONCE, I would probably arrive at a totally different outcome (like Joe writing to the local newspaper about having seen something quite amazing!).
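That classical case-by-case reasoning is just ordinary marginalization over hypotheses; a trivial sketch (the payoff function is hypothetical, only there to have something to deduce under each hypothesis):

```python
from fractions import Fraction

faces = [1, 2, 3, 4, 5, 6]

def deduce(outcome):
    """Whatever we deduce under the hypothesis 'Joe threw `outcome`'."""
    return outcome ** 2          # a made-up payoff

# Weigh the six mutually exclusive hypotheses with probability 1/6 each;
# no extra hypothesis "all results at once" is ever needed.
expected = sum(Fraction(1, 6) * deduce(f) for f in faces)
print(expected)                  # 91/6
```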
 
  • #157
You are right to point out the inadequacy of my density matrix example.

After a good bit of thought, I still conclude that your convincing argument about conditional probabilities is, in fact, incorrect.

It all boils down to the requirement that if

P(X) = P(X|A) P(A) + P(X|B) P(B)

then events A and B must be disjoint and independent. That is, they cannot occur simultaneously, so there is no interference involved in the above probability scheme.

In the quantum case, the situation is very different. For all practical purposes there are three independent (well, almost) distinct events for the two-slit experiment, and they are the initial experimental conditions: slit A open, or slit B open, or slits A and B open. Properly normalized and orthogonalized, the QM measurement pattern will be reproduced as the sum of conditional QM probabilities based on the three slit states.

Regards,
Reilly

vanesch said:
Well, for the very reason I repeat again. If we take the wavefunction of the particle, and we let it evolve unitarily, then at the slit, the wavefunction takes on the form:
|psi1> = (|slit1> + |slit2>)/sqrt(2)
which are essentially orthogonal states at this point (in position representation, |slit1> has a bump at slit 1 and nothing at slit 2, and vice versa).

Now, if this is to have a *probability interpretation*, then we have to say that at this point, our particle has a 50% chance to be at slit 1 and a 50% chance to be at slit 2, right?

A bit later, we evolve |psi1> unitarily into |psi2> and this time, we have an interference pattern. We write psi2 in the position representation, as:
|psi2> = sum over x of f(x) |x> with f(x) the wavefunction.

This time, we interpret |f|^2 as a probability density to be at point x.

Now, if at the first instance, we had 50% chance for the particle to be at slit 1, 50% chance to be at slit 2, then it is clear that |f|^2 = 0.5 P(x|slit1) + 0.5 P(x|slit2), because this is a theorem in probability theory:

P(X) = P(X|A) P(A) + P(X|B) P(B)

if events A and B are mutually exclusive and complete, which is the case for "slit 1" and "slit 2".

But we know very well that |f|^2 = 0.5 P(x|slit1) + 0.5 P(x|slit2) is NOT true for an interference pattern!

So in no way can we see |psi1> as giving a probability of 50% to go through slit 1 and 50% to go through slit 2.



The point is that a pure state, converted into a density matrix, after diagonalisation, always results in a TRIVIAL density matrix: zero everywhere, and a single ONE somewhere on the diagonal, corresponding to the pure state (which is part of the basis in which the matrix is diagonal).

As such, your density matrix will simply tell you that you have 100% probability to be in the state...

If you don't believe me, for a pure state, we have that rho^2 = rho. The only diagonal elements that can satisfy this are 0 and 1. We also have that Tr(rho) = 1, hence we can only have one single 1.
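To make the quoted interference argument concrete, here is a minimal numerical sketch; the Gaussian screen amplitudes standing in for the two slit contributions are made up for illustration (this is not a propagation calculation):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def amp(x0, kick):
    """Toy complex amplitude at the screen from one slit: a Gaussian
    envelope centred at x0, with an invented linear phase `kick`."""
    a = np.exp(-((x - x0) ** 2) / 8) * np.exp(1j * kick * x)
    return a / np.sqrt(np.sum(np.abs(a) ** 2) * dx)   # unit norm

f1, f2 = amp(-1.5, +2.0), amp(+1.5, -2.0)

psi = (f1 + f2) / np.sqrt(2)        # superposition: both slits open
quantum = np.abs(psi) ** 2          # has fringes
mixture = 0.5 * np.abs(f1) ** 2 + 0.5 * np.abs(f2) ** 2   # "slit 1 OR slit 2"

# The difference is exactly the cross term Re(conj(f1)*f2) that a
# probability mixture can never produce:
print(np.max(np.abs(quantum - mixture)))   # clearly nonzero
```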
 
  • #159
Reilly, even if you use the proper formula I suggested above, which accounts for the fact that A and B are not in general disjoint, it is not straightforward to interpret the terms in terms of the wavefunctions. Can you do it?

My suggestion is that the resolution is that the expression fails because x and A really don't refer to the same space. To explain this explicitly gets complicated, though, and the dynamics of the probability space needs to be accounted for for it to make complete sense.

When I have finished this explicitly I'll post it. Due to limited time I suspect it will take me until next year. This will restore a consistent probability interpretation, and the "trick" is to consider dynamical and relational probability spaces.

/Fredrik
 
  • #160
Fra said:
it is not straightforward to interpret the terms in terms of the wavefunctions

This is however only one problem, and most certainly the easiest part.

The other thing is to show not only that the wavefunction makes sense, but also where it comes from. I think this can be done too, and this is where it gets more complicated. I think of it as a spontaneous decomposition of the microstructure into dual structures that are related. And I'm trying to understand/explain why this is spontaneous and what the mechanism is. In essence I think it's a repartitioning of the observer's data record, and the particular repartitioning selected is simply more probable given the current incompleteness. I haven't done it yet though, and maybe I'm wrong. But at least I'm convinced enough to take the risk of being wrong.

/Fredrik
 
  • #161
reilly said:
It all boils down to the requirement that if

P(X) = P(X|A) P(A) + P(X|B) P(B)

then events A and B must be disjoint and independent.
(Speaking purely about probability...)

That's not correct. Independent means that

P(A and B) = P(A) * P(B)

Since disjoint means P(A and B) = 0, we see that two events can be disjoint and independent if and only if the probability of one of them happening is zero.

Furthermore, disjointness doesn't follow from that requirement: it is possible for A and B to happen simultaneously without X happening.

What the equation actually implies is:

P(X and not (A or B)) = P(X and A and B)

(in particular, when A and B are disjoint, both sides are forced to be zero).
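For reference, the decomposition does hold for every event X under the hypothesis vanesch used earlier (A and B mutually exclusive and complete); a brute-force sketch over a toy finite space:

```python
from fractions import Fraction
from itertools import product

omega = list(range(8))
P = lambda E: Fraction(len(E), len(omega))

A, B = {0, 1, 2, 3}, {4, 5, 6, 7}     # disjoint and complete: a partition
assert A & B == set() and A | B == set(omega)

def cond(E, C):                       # P(E | C)
    return P(E & C) / P(C)

# P(X) = P(X|A)P(A) + P(X|B)P(B) for EVERY event X under these hypotheses.
for bits in product([0, 1], repeat=len(omega)):
    X = {w for w, keep in zip(omega, bits) if keep}
    assert P(X) == cond(X, A) * P(A) + cond(X, B) * P(B)
print("total-probability law verified for all 2^8 events X")
```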
 
  • #162
Fra said:
Vanesh, in your view... do you ever attempt to explain where the complex amplitude formalism comes from, or is it just taken to be a fact, and you are trying to interpret it?
For the record...

If, given a quantum state and a measurement operator, you have some means of extracting the "expected value" of the operator... then the Gelfand-Naimark-Segal construction says that you can take the 'square root' of the state, giving you a bra and a ket that represent your quantum state. Such objects live in Hilbert spaces.

This applies to any state -- including statistical mixtures. In the case of a statistical mixture, the Hilbert space[1] produced by the GNS construction has a special form: it is reducible. You can split the Hilbert space into irreducible state spaces (e.g. you can split "particle with unit charge" into "particle with charge +1" and "particle with charge -1"). The state corresponding to a statistical mixture can always be decomposed into its individual parts.

The same cannot be said for pure states; the GNS construction provides you with an irreducible Hilbert space. Thus we see that pure states cannot be reinterpreted as statistical mixtures. (At least, not in any direct way)


[1]: more precisely, it's a unitary representation of the measurement algebra.
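Not the GNS construction itself, but its finite-dimensional shadow -- a mixture decomposes, a pure state does not -- is easy to see with density matrices (a sketch with arbitrary example states):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# A 50/50 statistical mixture: its density matrix decomposes into its parts.
rho_mix = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)
print(np.round(np.linalg.eigvalsh(rho_mix), 3))    # [0.5 0.5]: two components
print(np.allclose(rho_mix @ rho_mix, rho_mix))     # False: not idempotent

# A pure superposition: rho^2 = rho and Tr(rho) = 1, so the spectrum is a
# single 1 and the rest 0 -- no reading as a nontrivial statistical mixture.
plus = (up + down) / np.sqrt(2)
rho_pure = np.outer(plus, plus)
print(np.round(np.linalg.eigvalsh(rho_pure), 3))   # [0. 1.]: one component
print(np.allclose(rho_pure @ rho_pure, rho_pure))  # True: idempotent
```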
 
  • #163
reilly said:
You are right to point out the inadequacy of my density matrix example.

After a good bit of thought, I still conclude that your convincing argument about conditional probabilities is, in fact, incorrect.

It all boils down to the requirement that if

P(X) = P(X|A) P(A) + P(X|B) P(B)

then events A and B must be disjoint and independent. That is, they cannot occur simultaneously, so there is no interference involved in the above probability scheme.

I think you misunderstand me. To address Fra's justified remark: I think that if one is going to say that something has "a probability interpretation", it has to be in one and the same probability space (one and the same event universe with a Kolmogorov measure over it). It simply doesn't make sense, IMO, to talk about probabilities in different spaces when talking about something physical (e.g. an actual, objective, physical event), because probability just means a description of our ignorance, and is grossly justified by a frequentist repetition of the situation.

As such, when I say that the probability that you went through the left door of the building yesterday is 30%, that means two things: 1) it describes my state of knowledge about that event, and 2) it must correspond to a kind of "generic situation" compatible with all I know, in which in 30% of the cases you go through the left door (and, let's assume, 70% of the time you take the right door).
Of course, after asking you through which door you went, this probability changes, say, from 30% to 100% when you say "I went through the left door" (assuming you're not lying). But that is because my state of knowledge changes (namely, the additional phrase "I went through the left door" is added), so the set of generic events must now be the set where you also utter that phrase, and in THAT set, you ALWAYS go through the left door.

But all these events are part of one and the same probability space. The only thing that changes is the "generic subspace of events compatible with my knowledge", which is expressed in probability language with conditional probabilities.

If you have different probability spaces, you can always combine them into one bigger probability space, by adding a generic tag to each one. If Omega1 is a probability space with probability measure p1 over it, and Omega2 is another probability space with probability measure p2 over it, then it is possible to construct a new probability space,

Omega = Omega1 x {"1"} union Omega2 x {"2"}
(I added the tags 1 and 2).

The probability measure p over Omega can be of different kinds, but we can, for instance, define p(x) = a p1(x) if x is in Omega1 and b p2(x) if x is in Omega2; when a + b = 1, p will again be a probability measure, with P(Omega1) = a and P(Omega2) = b.

As such, we can recover the old measures as: p1(x) = P(x | x in Omega1) and p2(x) = P(x | x in Omega2).
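Spelled out as a quick sketch (toy spaces, with the weights a and b chosen arbitrarily):

```python
from fractions import Fraction as F

# Two separate probability spaces with their own measures.
p1 = {"heads": F(1, 2), "tails": F(1, 2)}              # Omega1
p2 = {"slit1": F(1, 2), "slit2": F(1, 2)}              # Omega2

a, b = F(1, 3), F(2, 3)                                # any weights with a + b = 1

# Omega = Omega1 x {"1"} union Omega2 x {"2"}, with p = a*p1 on tag 1, b*p2 on tag 2.
p = {(w, "1"): a * q for w, q in p1.items()}
p.update({(w, "2"): b * q for w, q in p2.items()})
assert sum(p.values()) == 1                            # p is again a probability measure

# Recover the originals as conditionals: p1(x) = P(x | x in Omega1), etc.
P_tag1 = sum(q for (w, t), q in p.items() if t == "1")                # = a
recovered_p1 = {w: q / P_tag1 for (w, t), q in p.items() if t == "1"}
assert recovered_p1 == p1
```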

It is in this sense that I understand that we work in ONE SINGLE probability space, and that it is in THIS unique global space that we cannot interpret the wavefunction, when it is at the height of the slits, as giving a probability density.

Indeed, let us take Omega1 to be the universe of events of measured impacts on the screen. p1 is then nothing else but the squared wavefunction of the interference pattern. Right. But now we want to interpret the wavefunction at the height of the slits ALSO as a probability-generating function. This would then be our Omega2, which consists of 2 events, namely "through slit 1" and "through slit 2", and p2 would then be 50-50.

It is the building of an overall probability space, as I showed above, that doesn't work in this case. THIS is what I mean when I say that you cannot give the wavefunction a probability interpretation when it is "in between measurements".

In the quantum case, the situation is very different. For all practical purposes there are three independent (well, almost) distinct events for the two-slit experiment, and they are the initial experimental conditions: slit A open, or slit B open, or slits A and B open. Properly normalized and orthogonalized, the QM measurement pattern will be reproduced as the sum of conditional QM probabilities based on the three slit states.

This only works when one considers as events only the "preparation" and "measurement" events, and NOT when one considers the "in between" states as EVENTS (which would be necessary to give them a probability interpretation).

I agree of course with you that P(measured at x | slit situation 1) will give you the right answers. But then we only consider "Omega" as consisting of "preparation" events and "measurement" events. This is fully in line with the Copenhagen view that you *cannot talk about what happens in between*. And that "cannot talk" is not only about IGNORANCE, but simply about non-talkability! If it were *ignorance*, then it could be seen as events in a probability space, and that's exactly what is impossible.

If the wavefunction at the height of the slit is to have an *ignorance* or probability interpretation - instead of NOT having an interpretation such as in Copenhagen - then it is to be seen as giving us a probability for "going through slit 1" and "going through slit 2".

And once we do that, we have completely "forgotten" the conditional probabilities of the experimental setup, simply because the wavefunction at the slits contains all the "information" needed for going further.

Let us consider the following events: A = slit 1 open, B = slit 2 open, C = slits 1 & 2 open;
X = particle through slit 1, Y = particle through slit 2;
U(x) = impact at position x on the screen.

P(U(x) | A ) gives us a blob behind 1, P(U(x) | B) gives us a blob behind 2, and P(U(x) | C) gives us an interference pattern. That's what you get out of quantum mechanics, and out of Copenhagen. X and Y don't exist in the event space of Copenhagen.

But if we insist on having X and Y in there (probability interpretation of the wavefunction all the time), then the situation is the following:

P(U(x) | A and X) = P(U(x) | X). Indeed, if we KNOW that slit 1 was open, AND we know that the particle went through slit 1, then we can limit ourselves to just knowing that the particle went through slit 1.
P(A and Y) = 0, so no conditional probability can be defined here;
P(B and X) = 0, idem;
P(U(x) | B and Y) = P(U(x) | Y).

We know this also from QM: if we evolve the state "at slit 1" onto the screen, we will find the blob behind 1.

And now comes the crux:
P(U(x) | (C and X)) = P(U(x) | X). Indeed, IF WE KNOW that the particle went through slit 1 (for instance by measuring it at the slit), then even if we opened slits 1 and 2, we simply find the blob behind 1. In other words, knowing that the particle went through slit 1 is sufficient, and the "extra information" that both slits are open doesn't change anything in the conditional probability.

And now we're home:

P(U(x) | (C and X) ) = P(U(x) | X ) = P(U(x) | (A and X))

In other words, when we count X and Y as events, and we take them conditionally, we don't care what were the preparations.

If that's true, then P(X|C) P(U(x) | (C and X)) + P(Y|C) P(U(x) | (C and Y)) should equal P(U(x) | C), and it isn't, because P(U(x) | (C and X)) = P(U(x) | X) = P(U(x) | A), and in the same way P(U(x) | (C and Y)) = P(U(x) | B), while P(X|C) = P(Y|C) = 0.5 (under the probability interpretation of the wavefunction).

So we should have: 0.5 P(U(x) | A) + 0.5 P(U(x) | B) = P(U(x) | C), which is not true.
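Numerically, the failing identity is easy to exhibit with the textbook Fraunhofer patterns (a sketch; the wavelength, slit width and separation are made-up values):

```python
import numpy as np

theta = np.linspace(-0.2, 0.2, 4001)        # screen angle (radians)
dtheta = theta[1] - theta[0]
lam, width, sep = 0.5e-6, 2e-6, 10e-6       # wavelength, slit width, separation

beta = np.pi * width * np.sin(theta) / lam  # single-slit diffraction parameter
alpha = np.pi * sep * np.sin(theta) / lam   # two-slit interference parameter

def density(I):
    return I / (I.sum() * dtheta)           # normalize intensity to a prob. density

P_A = density(np.sinc(beta / np.pi) ** 2)   # P(U(x)|A): blob from slit 1 alone
P_B = P_A                                   # symmetric setup: same envelope for slit 2
P_C = density(np.sinc(beta / np.pi) ** 2 * np.cos(alpha) ** 2)  # both open: fringes

mixture = 0.5 * P_A + 0.5 * P_B
print(np.max(np.abs(P_C - mixture)))        # clearly nonzero: the mixture has no fringes
```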
 
  • #164
I agree to a good extent with vanesch here regarding the problem, but perhaps not on the cure.

vanesch said:
I think that if one is going to say that something has "a probability interpretation", it has to be in one and the same probability space (one and the same event universe with a Kolmogorov measure over it). It simply doesn't make sense, IMO, to talk about probabilities in different spaces when talking about something physical

Yes. This is why I mumble about how they are not statistically independent spaces: they are related, and this relation also has a physical interpretation. Information in one space induces, via these relations, information in the connected spaces, until there is equilibrium. Also, new substructures can spontaneously form.

This thinking has similarities to decoherence selection by the environment.

vanesch said:
If you have different probability spaces, you can always combine them into one bigger probability space, by adding a generic tag to each one. If Omega1 is a probability space with probability measure p1 over it, and Omega2 is another probability space with probability measure p2 over it, then it is possible to construct a new probability space,

I sort of agree. In a sense, what I was talking about is that we decompose the "probability spaces", or the microstructure (sometimes a better word), into substructures that effectively work as standalone spaces but are connected to each other. So the decomposition into substructures serves a purpose, like a kind of self-organisation, to optimise storage. Clearly the concept of dimension is strongly related to this. Dimensionality is, in my thinking, related to the splitting of the microstructure. In effect the structure selection is "driven" by interaction with the environment.

I think this is tangent to what Hurkyl refers to. That measurements are selected - this is excellent. But there are still some details missing, aren't there? Not to mention gravity and inertia.

I see the foundational issues of QM as so closely touching the concept of inertia that I think this could also explain gravity. Or, as I think, rather just "identify gravity", like we identify other properties.

If you consider information, and the related microstructures, one can intuitively imagine inertia in subspaces, so there is a certain stability. The remodelling is unlikely to be chaotic. From my view at least, gravity/spacetime and these foundational issues of QM touch each other. I can't picture a consistent solution to one without the other in the information view.

/Fredrik
 
  • #165
Fra said:
Yes. This is why I mumble about how they are not statistically independent spaces: they are related, and this relation also has a physical interpretation. Information in one space induces, via these relations, information in the connected spaces, until there is equilibrium. Also, new substructures can spontaneously form.

It's clear that people use different words depending on their approach.

I would say that what I mean here is closely related to the various concepts of symmetry breaking. How come a uniform ignorance spontaneously splits and starts to form structures? How do the relations between substructures form? Are certain relations more likely to be formed than others? Is there even an answer to this? And what about stability? Why doesn't everything just happen all over the place, yielding nothing but chaos?

Why is spontaneous structure formation more likely than the opposite?

/Fredrik
 
  • #166
vanesch said:
grossly justified by a frequentist repetition of the situation.

If this is to be realistic, the information capacity of the observer seemingly limits the confidence level of the probability estimate, no? What's your comment on this?

This is something that I try to take very seriously, and resolve.

A consistent frequentist interpretation seems to contradict limited encoding capacity? If you make use of external memory, then aren't you modifying the condition C?

P(x|C)

Part of the implicit (not always written out) condition is, in my thinking, information capacity. I am trying to give a physical interpretation of "probabilities", but since they aren't really unitary in time, the words microstates and microstructures are better.

/Fredrik
 
  • #167
Hurkyl -- You are quite right, except I meant independent in time (Markovian). What happens now is independent of what happened then -- I refer to repetitions of the experiment.
Regards,
Reilly

Hurkyl said:
(Speaking purely about probability...)

That's not correct. Independent means that

P(A and B) = P(A) * P(B)

Since disjoint means P(A and B) = 0, we see that two events can be disjoint and independent if and only if the probability of one of them happening is zero.

Furthermore, disjointness doesn't follow from that requirement: it is possible for A and B to happen simultaneously without X happening.

What the equation actually implies is:

P(X and not (A or B)) = P(X and A and B)

(in particular, when A and B are disjoint, both sides are forced to be zero).
 
  • #168
A quick rejoinder, and a question.

RA
In the quantum case, the situation is very different. For all practical purposes there are three independent (well, almost) distinct events for the two-slit experiment, and they are the initial experimental conditions: slit A open, or slit B open, or slits A and B open. Properly normalized and orthogonalized, the QM measurement pattern will be reproduced as the sum of conditional QM probabilities based on the three slit states.

vanesch
This only works when one considers as events only the "preparation" and "measurement" events, and NOT when one considers the "in between" states as EVENTS (which would be necessary to give them a probability interpretation).

RA
I don't get it. Events are often defined as points in a probability space, as outcomes of some measurement procedure. So |W(x)|^2 represents a probability density in configuration space, defined by the eigenvalues and eigenstates of the position operator x. So, in between, the absolute square of the wave function will describe measurements showing interference effects related to those on other planes. And, of course, experiment will confirm the validity of the theory. What am I missing?

You and Fra talk about different spaces. What are they?
Regards,
Reilly
 
  • #169
reilly said:
I don't get it. Events are often defined as points in a probability space, as outcomes of some measurement procedure. So |W(x)|^2 represents a probability density in configuration space, defined by the eigenvalues and eigenstates of the position operator x. So, in between, the absolute square of the wave function will describe measurements showing interference effects related to those on other planes. And, of course, experiment will confirm the validity of the theory. What am I missing?

By definition, events related to observation can be seen as events in a probability space, and quantum mechanics describes that probability space perfectly. There's no problem with that. However, the wavefunction also "exists" (in whatever sense) *in between* observations. And it is there that one cannot give a probability interpretation to that wavefunction. Copenhagen has no problems with that: the wavefunction is just a tool to calculate the probabilities of measurements, so its meaning *in between* measurements is left undefined - it's mostly seen as a kind of calculational tool, that's all.
If you insist on giving the wavefunction a probabilistic interpretation *always* (so not only at observations, but also in between) you run into the kinds of problems I tried to explain. That's all.

So EITHER we have a probabilistic description of events at preparation/measurement and NOTHING in between (except for calculational aids), OR we have something physical there, which is *sometimes* to be seen as a probability, and *sometimes* not.

But I agree with you that this is NOT observable, because to be observable means to be a measurement, and then we know that we can use probability theory. I'm talking about the wavefunction of a thing *in between* measurements - like we can talk about the EM wave between emission and detection. I'm trying to say that - inasmuch as we want to have a physical picture - we have no difficulty accepting that there IS an E/B field between the emitter and the receiver, and, applying a similar logic, we should accept that there is something physical like a wavefunction in between preparation and measurement, and that we can't see that simply as a kind of probability description.
Copenhagen denies that. It is a position.
 
  • #170
So it is in the absence of observation that the wave function exists.
 
  • #171
It's clear that we disagree on some things and agree on some parts. I'm currently not in a position to give a full explanation of what I mean, because it's work in progress, but here are some more comments.

vanesch said:
By definition, events related to observation can be seen as events in a probability space, and quantum mechanics describes that probability space perfectly.

I can't accept that as a definition. It's usually a postulate of QM - I call it an assumption. And the reason I point it out is of course because I disagree with the assumption, except in special cases.

What I suggest is that, in general, the decomposition of just-observed "raw data", if I may use that word, into a specific probability space seems to be ambiguous. I haven't seen a rigorous argument that deduces a unique structure from the raw data.

And my point is not that there exists such a deduction; what I think is that there exists an induction that ultimately selects this structure/space. But this is fundamentally "dynamical", and actually relational to the history of the data.

It's exactly such an induction I am looking for. And I think even such an induction itself is not static; it's evolving too. Everything is "sliding", so how can we get stability and predictability? This is where I think the rescue is inertia, and time seen as a parametrisation of this remodelling.

vanesch said:
There's no problem with that. However, the wavefunction also "exists" (in whatever sense) *in between* observations. And it is there that one cannot give a probability interpretation to that wavefunction. Copenhagen has no problems with that: the wavefunction is just a tool to calculate the probabilities of measurements, so its meaning *in between* measurements is left undefined - it's mostly seen as a kind of calculational tool, that's all.

Since I argue for a reconstruction of quantum foundations, I am not yet sure how big the modifications of the wavefunction terminology will turn out to be. But if we ignore that detail for a second, and take "wavefunction" to mean what it is today plus any possible improvement taking its effective place, then the way I see it the wavefunction exists and is defined by the microstructure and microstate of an observer. This means that the wavefunction is not fully objective; it's subjective and part of the observer's identity, meaning it's observable to the observer himself, but not fully to others.

The "wavefunction" can change even without new observations, and that's due to the observers internal "dynamics" that ideally is in harmony with the environment. It's the "expected" projection of the environment. But this expectation is updated on each observation or release of information.

The way I picture the probability-like interpretation in between measurements is by connecting the state of the microstructure of the observer to a probability; thus the evolution of the wavefunction is simply the only consistent "self-evolution" of the observer's information. This takes place because typically the information also contains information about the dynamics, and about uncertainty. Thus, even left unperturbed, this information is bound to be subject to self-evolution in order to respect itself. This, I imagine, is also the basis for something like an "internal clock".

I am unable to make this reasoning work without connecting to inertia and time. I have been fascinated by this fact: that a consistency analysis of QM alone seems to point in a direction that smells strongly of relativity.

/Fredrik
 
  • #172
reilly said:
You and Fra talk about different spaces. What are they?

If you like, perhaps considering them my personal pet ideas is the easiest rescue here. But I've tried to elaborate the ideas.

But the different spaces can be seen as "substructures" of the one full structure.

Let's consider the data perspective, and let's not talk about time.

For example, suppose you have 9 samples. What determines whether you have 9 samples from a one-dimensional structure, or 3 triples from a 3-dimensional space? The raw data is the same. Or 4 pairs and a mismatched sample?

That's a silly example, but what I mean to say is that the raw data can be substructured in various ways. And not just in this simple way: a more sophisticated game starts when you have a limited memory record. Suppose you can only store 9 samples. Then, when you get the 10th sample, you need to make a decision. How do you incorporate/remodel your current record, to update your information with the new sample, and at the same time get rid of one sample? You obviously need to throw away the least important sample - you need to make a decision. You want to maximise your learning given some constraints.

More advanced options start when you consider performing a transformation on the whole, or on parts, of your record, and then storing the transformed record instead. MAYBE that is a way to retain more information with the same storage capacity? This is in effect data compression. Think of the Fourier transform, for an example. Then, which transformation is most efficient? Many complicated things can be done here, and since we continuously train and remodel the record, we are no longer talking about equivalent information.
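As a toy sketch of the Fourier-compression idea (everything invented for illustration: a record of N samples kept as its k largest Fourier coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 64, 8                                 # record size, retained coefficients

t = np.arange(N)
record = (np.sin(2 * np.pi * 3 * t / N)      # a smooth signal...
          + 0.5 * np.sin(2 * np.pi * 7 * t / N)
          + 0.1 * rng.standard_normal(N))    # ...plus a little noise

coeffs = np.fft.rfft(record)
keep = np.argsort(np.abs(coeffs))[-k:]       # indices of the k largest coefficients
compressed = np.zeros_like(coeffs)
compressed[keep] = coeffs[keep]              # store only k numbers (and their indices)

reconstructed = np.fft.irfft(compressed, n=N)
err = np.linalg.norm(record - reconstructed) / np.linalg.norm(record)
print(f"kept {k}/{len(coeffs)} coefficients, relative error {err:.3f}")
```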

I'm not sure if this makes sense. If not, I point out again that most of this is my personal thinking; no need to give anyone a headache, at least until I am in a position to present an explicit formalism.

But to connect to reilly's original post that I responded to: subjective Bayesian probability reasoning and inductive reasoning are key tools for me in this.

/Fredrik
 
  • #173
In case the idea occurred to anyone: I don't think this will generally come out as cleanly as a Markov process. But possibly "almost" like a Markov process in some limiting cases.

Similarly I expect, _in general_, violation of unitarity, but in many cases an effective unitarity will be emergent, so this will be entirely consistent with the effective unitarity we know from QM.

/Fredrik
 
  • #174
vanesch -- Finally I get it, or I get why we have never quite agreed. Now, I still go with Born, in that, ultimately, what happens in between is properly described in terms of quantum probabilities.

First, how do you know that B and E exist in between? Yes, there's enormous circumstantial evidence to support continuity in electromagnetism. But you really don't know until you measure, and the quantum world will cause a few problems of interpretation. The same is true for photons or electrons going through slits; it's the possible measurements which define the events of probability spaces.

One way, in principle, to show the in-between would be to use muons, or pi naughts, or ..., in a two-slit experiment. Then, as the muons go through the slits, their decays will show the in-between probabilities -- much like Faraday's iron filings in a magnetic field. (The clever experimenter will, of course, fix things so that the bulk of the decays occur between the transmission and receptor screens.)

Regards,
Reilly
vanesch said:
By definition, events related to observation can be seen as events in a probability space, and quantum mechanics describes that probability space perfectly. There's no problem with that. However, the wavefunction also "exists" (in whatever sense) *in between* observations. And it is there that one cannot give a probability interpretation to that wavefunction. Copenhagen has no problems with that: the wavefunction is just a tool to calculate the probabilities of measurements, so its meaning *in between* measurements is left undefined - it's mostly seen as a kind of calculational tool, that's all.
If you insist on giving the wavefunction a probabilistic interpretation *always* (so not only at observations, but also in between) you run into the kinds of problems I tried to explain. That's all.

So EITHER we have a probabilistic description of events at preparation/measurement and NOTHING in between (except for calculational aids), OR we have something physical there, which is *sometimes* to be seen as a probability, and *sometimes* not.

But I agree with you that this is NOT observable, because to be observable means to be a measurement, and then we know that we can use probability theory. I'm talking about the wavefunction of a thing *in between* measurements - like we can talk about the EM wave between emission and detection. I'm trying to say that - inasmuch as we want to have a physical picture - we have no difficulty accepting that there IS an E/B field between the emitter and the receiver, and, applying a similar logic, we should accept that there is something physical like a wavefunction in between preparation and measurement, and that we can't see that simply as a kind of probability description.
Copenhagen denies that. It is a position.
 
