- #1

- Thread starter Mike2

- #2

[tex]P(i, t; f, t') = \left| \langle f | \exp\left( -\frac{i}{\hbar} \int_{t}^{t'} H(\tau) \, d\tau \right) | i \rangle \right|^2[/tex]

If these states aren't energy eigenstates, then in general this could be any number between 0 and 1.
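To see this concretely, here is a hypothetical numerical sketch (the Hamiltonian, time, and states are arbitrary choices, not from the thread): a two-level system in which |i> and |f> are not energy eigenstates, so the transition probability lands strictly between 0 and 1.

```python
# Hypothetical two-level example: transition probability between
# basis states that are NOT energy eigenstates of H.
import numpy as np

hbar = 1.0
t = 1.3                                   # evolution time t' - t (arbitrary)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])               # generic Hermitian Hamiltonian

# U = exp(-i H t / hbar), built from the eigendecomposition of H
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t / hbar)) @ evecs.conj().T

i_state = np.array([1.0, 0.0])            # |i>, not an eigenstate of H
f_state = np.array([0.0, 1.0])            # |f>

p = abs(np.vdot(f_state, U @ i_state)) ** 2
print(p)                                  # strictly between 0 and 1
```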

- #3

[tex]\langle \Omega \rangle = \mathrm{Tr}\{\Omega \rho\}[/tex]

where [tex]\rho[/tex], the density matrix, is given by:

[tex]\rho = \frac{e^{-\beta \hat{H}}}{Q}[/tex]

with [tex]Q = \mathrm{Tr}\, e^{-\beta \hat{H}}[/tex] the partition function.
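As a hypothetical sketch of this recipe (toy diagonal Hamiltonian and observable chosen arbitrarily), the thermal expectation value is just a trace against e^(-beta H)/Q:

```python
# Toy thermal expectation value <Omega> = Tr(Omega rho),
# with rho = exp(-beta H) / Q and Q = Tr exp(-beta H).
import numpy as np

beta = 2.0                                # inverse temperature (arbitrary)
energies = np.array([0.0, 1.0, 2.0])      # H diagonal in this basis
Omega = np.array([[1.0, 0.2, 0.0],
                  [0.2, 0.5, 0.1],
                  [0.0, 0.1, 2.0]])       # some Hermitian observable

weights = np.exp(-beta * energies)        # Boltzmann factors
Q = weights.sum()                         # partition function
rho = np.diag(weights / Q)                # density matrix, Tr rho = 1

expectation = np.trace(Omega @ rho).real
print(expectation)
```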

- #4

[tex]P(i, t; f, t') = \left| \langle f | \exp\left( -\frac{i}{\hbar} \int_{t}^{t'} H(\tau) \, d\tau \right) | i \rangle \right|^2[/tex]

If these states aren't energy eigenstates, then in general this could be any number between 0 and 1.

Thank you. But my question is about the probabilities involved in sequential measurements. If you make a second measurement, of say a photon, after a first measurement, what is the overall probability of obtaining a particular value of the second measurement? I know it's supposed to be the probability of obtaining the first measured value times the probability of obtaining the second measured value from the state obtained from the first measurement. Are all these probabilities conditional probabilities, joint probabilities, marginal probabilities, what? Thanks.


- #5

Mike2 said: Thank you. But my question is about the probabilities involved in sequential measurements. If you make a second measurement, of say a photon, after a first measurement, what is the overall probability of obtaining a particular value of the second measurement? I know it's supposed to be the probability of obtaining the first measured value times the probability of obtaining the second measured value from the state obtained from the first measurement. Are all these probabilities conditional probabilities, joint probabilities, marginal probabilities, what? Thanks.

Or in other words, does the probability of measuring a value depend only on the state being measured and not on anything else? I understand that there can be simultaneous eigenstates for commuting operators with two different measurements on the same state. Still, the overall probability is the multiplication of the individual probabilities of measuring each value of compatible observables. The probability of measuring the second given the first is 1, since the eigenstate for measuring the first is also the state for measuring the second. But what about incompatible/non-commuting observables? Is the probability of measuring the second given the first 0, so that again the overall probability is a multiplication of the probability of measuring the first times the probability of measuring the second? Are there any circumstances in which the overall probability of measuring two observables is NOT simply the multiplication of the two measurements, whether they be compatible or incompatible, sequential or simultaneous, degenerate or not degenerate, etc.? Thanks.

Last edited:

- #6

vanesch

Staff Emeritus

Science Advisor

Gold Member


It is in these cases that it is interesting to "let the wavefunction run". You understand that this is MWI-inspired, but in fact, the calculations are independent of any interpretational issue.

If you have an initial state |psi1> and you do a "preparation" so that you know it is in eigenstate "start", then we can write:

|psi1> = a |start> + b |not-start>

If we now apply the time evolution U(t), we have |psi2>:

|psi2> = U(t) |psi1> = a U(t) |start> + b U(t) |not-start>

And if we measure state |end1> or state |end2> then we have to write:

|psi2> = c |end1>|stuff> + d |end2> |otherstuff>

Now, the fact that there was recorded information about the preparation (*), even after having applied the second measurement, means that in U(t) |start> there is a piece that "remembers" the start condition.

So we can write

|psi2> = u|end1>|fromstart> + v|end2>|fromstart> + w|end1>|fromnotstart> + x |end2> |fromnotstart>

We see that |c|^2 = |u|^2 + |w|^2

and |d|^2 = |v|^2 + |x|^2

We also have that |a|^2 = |u|^2 + |v|^2

and |b|^2 = |w|^2 + |x|^2

As such, the joint probability to have "start" AND "end1" is given by |u|^2.

The probability to come from "start" is |a|^2

The conditional probability to have "end1" IF we come from "start" is then given by:

P(end1 | fromstart) = |u|^2 / |a|^2 = |u|^2 / (|u|^2 + |v|^2)

On the other hand, if we had "projected" after the preparation, then we would say that we had only a|start>, with squared norm |a|^2, and after time evolution we would end up with:

U(t) a |start> = (u|end1> + v|end2>)|fromstart>

We would then find the component with |end1> (we would probably not bother with the common |fromstart> part) which has norm squared |u|^2.

However, we would in fact have normalized the "starting" vector (divide by |a|):

U(t) |start> = 1/|a| (u|end1> + v|end2>)|fromstart>

and then the component with |end1> would have squared norm |u|^2/|a|^2.

This is nothing else but the conditional probability calculated earlier.

(*) This is important. It means that there has not been any "quantum erasure" thing, and that the state preparation has been irreversibly recorded, independent of whatever next measurement was going to be done.

For instance, in a 2-slit experiment, if we "prepare" by closing one of the slits, then a state that would have been "through the closed slit" will never ever interfere with the state "through the open slit", because the first one will be reflected back, or will have heated up some material, and will have left an essentially irreversible recording. Even after whatever measurement we plan to do, we will STILL know which slit has been closed.

This is essential to be able to say that the |fromstart> and the |notfromstart> states are essentially orthogonal.
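The bookkeeping above can be checked numerically. In this hypothetical sketch the four amplitudes u, v, w, x are simply picked by hand on the orthonormal product basis; the joint and conditional probabilities then come out exactly as stated:

```python
# Check: joint P(start AND end1) = |u|^2, and
# P(end1 | fromstart) = |u|^2 / (|u|^2 + |v|^2).

# amplitudes on |end1>|fromstart>, |end2>|fromstart>,
#               |end1>|fromnotstart>, |end2>|fromnotstart>
u, v, w, x = 0.5, 0.5, 0.5j, 0.5          # arbitrary normalized choice
assert abs(abs(u)**2 + abs(v)**2 + abs(w)**2 + abs(x)**2 - 1.0) < 1e-12

a2 = abs(u)**2 + abs(v)**2                # |a|^2 = P(fromstart)
b2 = abs(w)**2 + abs(x)**2                # |b|^2 = P(fromnotstart)
c2 = abs(u)**2 + abs(w)**2                # |c|^2 = P(end1)

joint = abs(u)**2                         # P(end1 AND fromstart)
cond = joint / a2                         # P(end1 | fromstart)
print(cond)
```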



- #7

It is in these cases that it is interesting to "let the wavefunction run". You understand that this is MWI-inspired, but in fact, the calculations are independent of any interpretational issue....

Sorry, you lost me here, that is, if you were even addressing my question. So are you saying that there IS a possibility that the overall probability after making two measurements is NOT the multiplication of the probabilities of obtaining each measurement individually?

I'm thinking in terms of a sample space of all possible states with a probability associated with each state like elements in a probability space. Then one state (with a given probability) moves to another state (with a different probability) with a probability dependent only on the state it moved to and the state it came from, and not on any state prior to the state it came from. Then the probability of any two moves in the sample space (two measurements) is equal to the multiplication of the probability of moving from the first to the second state (first measurement) times the probability of moving from the second to the third state (the second measurement). I can't think of any exceptions, can you?

Then again, I might be confusing the probability associated with a move (a measurement) in the sample space (of all possible states) with the probability associated with the "position" of each sample (within the space of all possible states). Is the probability associated with a move (measurement) the same as the conditional probability associated with one "position" times the probability of the next "position"? Or is a "move" (measurement) simply the probability, out of all possibilities (in the sample space), of there being both the initial state and the measured final state? This would probably be divided by the probability of the initial state, since we may not know the probabilities of putting the system in the initial state. I hope this is understandable. I'm trying to understand measurement probabilities in terms of the probability of the occurrence of a state out of ALL the possible states that can occur. Any help is appreciated.


- #8

vanesch

Staff Emeritus

Science Advisor

Gold Member


Sorry, you lost me here, that is, if you were even addressing my question. So are you saying that there IS a possibility that the overall probability after making two measurements is NOT the multiplication of the probabilities of obtaining each measurement individually?

Exactly. That's the big thing in quantum theory. The first measurement is a physical interaction, and has ALTERED the wavefunction in such a way that the probability distributions that will be spit out at a later moment (given that the previous measurement took place) will be different than what they would have been if the first measurement didn't take place.

In other words, the joint probability for two successive measurements is not equal (in general) to the product of the probability of the first measurement AND the probability of the second measurement "alone", meaning, without having done the first measurement. The first measurement (no matter what its outcome) has changed the situation for the second measurement.

This is why you cannot introduce a general Kolmogorov probability distribution for all thinkable measurements in quantum theory (because otherwise, what you say would be a consequence of these axioms).

However, what IS true of course, for two given measurements, is this: the joint probability of getting result A for the first and result B for the second equals the probability of getting result A for the first (which is independent of what will happen later) times the conditional probability of getting result B for the second *if we know* that the result was A for the first. That is what I tried to show in my previous post. We can also a posteriori "reconstruct" the probability distribution of result B, weighted over all possible results of A. But the thing is that this probability distribution of result B, weighted over all possible outcomes of A, is in general NOT equal to the probability of result B if the measurement of A didn't take place!

You can see this in the double slit experiment:

Suppose you measure through which slit the particle goes (that is result A). Measure the impact position (result B). Well, if you take as a condition that the result A was "left slit", you will get a bump for B, slightly to the left. Similarly for the result A' (right slit): you'll get a bump for B, slightly to the right. The weighted result of "B alone" (but when A has been performed) is simply the sum of two bumps.

However, if you DO NOT measure A, then you get an interference pattern for B.
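A hypothetical one-dimensional sketch of this (Gaussian slit amplitudes and a transverse phase chosen arbitrarily): measuring A adds probabilities, giving two bumps, while not measuring A adds amplitudes, leaving a non-vanishing interference term in B.

```python
# Two-slit toy model: which-path measurement vs no measurement.
import numpy as np

xs = np.linspace(-10, 10, 2001)
k = 3.0                                   # assumed transverse phase gradient

psi_L = np.exp(-(xs + 1.5)**2) * np.exp(1j * k * xs)   # left-slit amplitude
psi_R = np.exp(-(xs - 1.5)**2) * np.exp(-1j * k * xs)  # right-slit amplitude

p_measured = abs(psi_L)**2 + abs(psi_R)**2      # A measured: sum of probabilities
p_unmeasured = abs(psi_L + psi_R)**2            # A not measured: sum of amplitudes

interference = p_unmeasured - p_measured        # cross term 2 Re(psi_L* psi_R)
print(abs(interference).max())                  # does not vanish
```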

Or maybe I misunderstand you?

- #9

Exactly. That's the big thing in quantum theory. The first measurement is a physical interaction, and has ALTERED the wavefunction in such a way that the probability distributions that will be spit out at a later moment (given that the previous measurement took place) will be different than what they would have been if the first measurement didn't take place.

In other words, the joint probability for two successive measurements is not equal (in general) to the product of the probability of the first measurement AND the probability of the second measurement "alone", meaning, without having done the first measurement. The first measurement (no matter what its outcome) has changed the situation for the second measurement.

This is why you cannot introduce a general Kolmogorov probability distribution for all thinkable measurements in quantum theory (because otherwise, what you say would be a consequence of these axioms).

However, what IS true of course, for two given measurements, is this: the joint probability of getting result A for the first and result B for the second equals the probability of getting result A for the first (which is independent of what will happen later) times the conditional probability of getting result B for the second *if we know* that the result was A for the first. That is what I tried to show in my previous post. We can also a posteriori "reconstruct" the probability distribution of result B, weighted over all possible results of A. But the thing is that this probability distribution of result B, weighted over all possible outcomes of A, is in general NOT equal to the probability of result B if the measurement of A didn't take place!

...

Or maybe I misunderstand you?

I think we're starting to understand each other. So we do NOT always have that the probability of measuring B from the initial state |initial> is the product of the probability of going from |initial> => |A> multiplied by the probability of going from |A> => |B>, correct? And this is because measuring B from the state |A> is different from measuring B from the state |initial>, right?

And you're saying that this is what prevents us from assigning a probability distribution to each state in the space of every imaginatively possible state, right?

Could you tell me a little more about Kolmogorov probability distributions? What was he trying to accomplish? Maybe I'm trying to duplicate his efforts and don't know it yet. Thanks.

- #10

vanesch

Staff Emeritus

Science Advisor

Gold Member


I think we're starting to understand each other. So we do NOT always have that the probability of measuring B from the initial state |initial> is the product of the probability of going from |initial> => |A> multiplied by the probability of going from |A> => |B>, correct? And this is because measuring B from the state |A> is different from measuring B from the state |initial>, right?

And you're saying that this is what prevents us from assigning a probability distribution to each state in the space of every imaginatively possible state, right?

Exactly. For each state, AND a given measurement, we can of course generate an entirely respectable probability distribution. But BOTH are part of the picture, and there doesn't exist any overall probability distribution which describes all thinkable (but incompatible) measurement outcomes of all measurements together.

Could you tell me a little more about Kolmogorov probability distributions? What was he trying to accomplish? Maybe I'm trying to duplicate his efforts and don't know it yet. Thanks.

The Kolmogorov axioms of probability are simply the basic axioms we want probability distributions to satisfy. The name is used because people later invented other kinds of "probability distributions" which do not correspond to anything that could be a frequentist probability.

About the guy: http://en.wikipedia.org/wiki/Kolmogorov

about Kolmogorov axioms: http://en.wikipedia.org/wiki/Kolmogorov_axioms

So, a Kolmogorov probability distribution is a standard probability distribution as you know it.


- #11

Exactly. For each state, AND a given measurement, we can of course generate an entirely respectable probability distribution. But BOTH are part of the picture, and there doesn't exist any overall probability distribution which describes all thinkable (but incompatible) measurement outcomes of all measurements together.

On second thought, it seems it would be impossible to determine the probability distribution for the space of every conceivable state, since we cannot measure every conceivable state. We can only measure the probability of going from one state to another, and there may be many (infinitely many?) states between these two that we cannot measure. But the probabilities accumulate by multiplication for each subsequent measurement, right? If we start with |A> and go to |B> with probability B, then go from |B> to |C> with probability C, then go from |C> to |D> with probability D, then the probability from |A> to |D> is B*C*D always, irrespective of degeneracy, commutativity, or simultaneity, right? I think that's all I really need to continue.
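That multiplication rule can be illustrated with a hypothetical sketch (random states and projective measurements, chosen purely for illustration): once each intermediate outcome is actually measured and recorded, the probability of the whole recorded chain is the product of the successive Born-rule factors.

```python
# Chain of projective measurements |A> -> |B> -> |C> -> |D>:
# the joint probability of this recorded sequence is the product
# of the successive Born-rule factors P(next | current).
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim):
    # a random normalized complex state vector
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def born(current, nxt):
    # P(nxt | current) for a projective measurement onto |nxt>
    return abs(np.vdot(nxt, current)) ** 2

A, B, C, D = (random_state(4) for _ in range(4))

p_chain = born(A, B) * born(B, C) * born(C, D)
print(p_chain)                            # a legitimate probability in (0, 1)
```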

For if we consider two states that we know, |A> an initial state and |B> the result of a measurement, these are in conjunction. And a conjunction implies dual implication:

A*B => (A => B)*(B => A), where * is a logical conjunction and => is implication.

Then if * gives us a number (a probability), => would have to be the square root of that number (perhaps an amplitude). Do you know of any studies of this nature out there? I would hate to repeat someone's failed efforts. (Of course, if they failed, you'd never hear of it, would you?)


- #12

Exactly. For each state, AND a given measurement, we can of course generate an entirely respectable probability distribution. But BOTH are part of the picture, and there doesn't exist any overall probability distribution which describes all thinkable (but incompatible) measurement outcomes of all measurements together.

I could use more clarification, again... I understand that we may not be able to know the probability distribution for each state in the space of all possible states imaginable. That is probably because we don't know the size of that space, so we can't say how likely it is to pick one state out of a total number that we don't know. But I wonder if that precludes the supposition that such a distribution might theoretically exist. For when I consider how probabilities come into the picture to begin with, I have to wonder if the probabilities that do exist for going from one state to another might derive from some theoretical overall distribution. So I wonder if the probability associated with going from an initial state to a measured state might be a conditional probability determined from an overall distribution. Do we actually need to know the overall distribution, for example, to determine a conditional probability? Or can a conditional probability exist without knowing the overall distribution? (If "conditional probability" is even the right term.) Thanks.
