Quantum Measurement: Exploring CNOT and Bell's Theorem

  • #1
Hurkyl
(Maybe this should go in philosophy? Feel free to move it.)

I'm confused.

Well, that's probably because I haven't really learned QM properly, except for the basics of Quantum Computing.

But still, I'm confused.



Why is measurement modeled as a projection, or a "collapse" of a wavefunction? Wouldn't it make more sense to incorporate the measuring device into the quantum state?


My perspective is probably limited because of how little I know... but when we went over Quantum Computing in class, it struck me that many quantum gates act like measurements. For simplicity, let's look at a CNOT gate specifically:

As a reminder, a CNOT gate is an operation on two qubits that acts as follows on the basis states:

[tex]\mathrm{CNOT}|x>\otimes|y> = |x>\otimes|x \oplus y>[/tex]

In particular, if we prepare |y> to be |0>, then we have the following action on the basis states:

[tex]\mathrm{CNOT}|x>\otimes|0> = |x>\otimes|x>[/tex]


This strikes me as looking very much like a measurement -- we took the value of one qubit and "stored" it in another qubit.
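For concreteness, here is a minimal numpy sketch (my own illustration, not part of the original post) checking this copying behaviour on the basis states:

[code]
import numpy as np

# CNOT in the computational basis |00>, |01>, |10>, |11>
# (first qubit is the control, second the target)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

for x, ketx in [(0, ket0), (1, ket1)]:
    state_out = CNOT @ np.kron(ketx, ket0)                 # CNOT acting on |x> (x) |0>
    assert np.array_equal(state_out, np.kron(ketx, ketx))  # equals |x> (x) |x>
    print(f"CNOT |{x}0> = |{x}{x}>")
[/code]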


The CNOT gate, incidentally, can also be used as a comparator. Observe:

[tex]\mathrm{CNOT}|0>\otimes|0> = |0>\otimes|0>[/tex]
[tex]\mathrm{CNOT}|1>\otimes|1> = |1>\otimes|0>[/tex]

[tex]\mathrm{CNOT}|0>\otimes|1> = |0>\otimes|1>[/tex]
[tex]\mathrm{CNOT}|1>\otimes|0> = |1>\otimes|1>[/tex]

So we take two bits in, and when we apply CNOT, the second bit tells us if they were equal or not.


We can even observe it multiple times, and compare observations. For example, take the following quantum program:

Take the qubit x as input.
Let y = |0>
Let z = |0>
"measure" x by applying CNOT to (x,y)
"measure" x by applying CNOT to (x,z)
"compare" y to z by applying CNOT to (y, z)
Return z

Of course, by this, I mean you start with [itex]|x>\otimes|0>\otimes|0>[/itex], then apply CNOT to the first two, then to the outer two, then to the last two. After applying these operations, the final qubit will always be zero... the two measurements are always the same!
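Here is a numpy sketch of that program (my own illustration; the helper cnot, which builds the full 8x8 gate matrix, is hypothetical and not from the thread). The point is that the returned qubit z reads 0 with probability 1, whatever the input state of x:

[code]
import numpy as np

def cnot(n_qubits, control, target):
    """Full 2^n x 2^n permutation matrix for a CNOT with the given
    control and target qubits (0-indexed; qubit 0 is the most significant)."""
    dim = 2 ** n_qubits
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n_qubits - 1 - k)) & 1 for k in range(n_qubits)]
        if bits[control] == 1:
            bits[target] ^= 1
        j = sum(b << (n_qubits - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

ket0 = np.array([1.0, 0.0])
ketx = np.array([1.0, 1.0]) / np.sqrt(2)     # an arbitrary input, (|0> + |1>)/sqrt(2)

psi = np.kron(ketx, np.kron(ket0, ket0))     # |x> (x) |0> (x) |0>
psi = cnot(3, 0, 1) @ psi                    # "measure" x into y
psi = cnot(3, 0, 2) @ psi                    # "measure" x into z
psi = cnot(3, 1, 2) @ psi                    # "compare" y to z

# probability that the last qubit reads 1: sum |amplitude|^2 over odd indices
print(sum(abs(psi[i]) ** 2 for i in range(8) if i & 1))   # 0.0 -- always equal
[/code]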


If we model real-world measurements in this way, it seems weirdness goes away. Sure, we'd be doomed to live in a world where Schrödinger's cat is perpetually both dead and alive, but when we compare any observations, they will always be consistent.

Other weirdness goes away too -- I did some scratchwork and it would appear that there's no problem measuring the spin state in both the x and y directions. Incidentally, what little I've done feels awfully classical in this perspective, which is nice.


I know I have work to do... the next thing, I guess, is to try to really understand Bell's theorem and show that this model of measurement behaves properly -- I guess I need to find a program that looks like the thought experiment and show that the "right" answer falls out? I have a sinking feeling that it's not going to, though... which means I'm going to be tormented until I understand what's going wrong. :frown:


EDIT: Fixed a mistake in the definition of CNOT
 
  • #2
Oh, a quick addendum that looks promising:

I went back to my scratchwork about measuring the spins along both the x and y axes... and I wondered what would be the effect of collapsing that wave function, and the "right" answer pops out:


So, we've done my "measurements" using the CNOT gate, and now we do a "real" measurement of the spin state of the original object around the x-axis.

It turns out that the qubit representing the spin state around the y-axis is now [itex](\pm|0> \pm |1>) / \sqrt{2}[/itex]! Just what it should be!

(Okay, okay, I'm pretty sure I'm off by a phase shift. Sue me :tongue:)
 
  • #3
Hurkyl said:
Why is measurement modeled as a projection, or a "collapse" of a wavefunction? Wouldn't it make more sense to incorporate the measuring device into the quantum state?

My understanding is that it's because measuring devices are usually complex macroscopic objects with many internal degrees of freedom, and we don't really understand (yet) the details of how such systems interact with simpler systems that exhibit what we think of as quantum-mechanical behavior. It basically sweeps all those details under the rug so that we can make practical calculations.

In principle we should be able to treat the system that we're studying, and the measuring instrument that we're studying it with, as a single quantum-mechanical system, but for practical purposes it's simply not possible. I think of it as sort of like the distinction between the classical mechanics of a single particle or other simple system, versus the classical mechanics of a few times [itex]10^{23}[/itex] atoms (i.e. statistical mechanics).
 
  • #4
What bothers me about that explanation is that a detector is specifically designed not to behave "randomly" -- it's supposed to tell you, for example, what the spin of an object is.

And it does this very well: you feed it something in a pure spin up state, and it says "Spin is up!". Similarly you feed it something in a pure spin down state, and it says "Spin is down!"

If you feed it something in the state (|up> + |down>)/√2, what I've read about quantum measurement says that once it goes into the detector, it's "collapsed" into either the up or the down state, and the detector says which...

But it seems to me that what should happen is that feeding it into the detector puts the system in the state (|up>x|detector says up> + |down>x|detector says down>)/√2, regardless of what the detector is actually made of.

At the moment, I just can't see how it can be possible for it not to be like this. If it acts as I've stated on the basis states, then it should act as I've stated on superpositions of them. :frown:

But I'm ignoring the many degrees of freedom... allow me to restate:

We know that the process of detection takes the state:

|up> x |*> x |junk>
to the state
|up> x |up> x |junk'>

Where the first is the particle, the second is the reading on the detector, and the third is the other degrees of freedom we don't really know how to handle. (|*> means we don't care what the actual state of the detector is)

Similarly, it takes
|down> x |*> x |junk>
to
|down> x |down> x |junk''>

So, it absolutely positively must take:

(1/√2) (|up> + |down>) x |*> x |junk>
to
(1/√2) (|up> x |up> x |junk'> + |down> x |down> x |junk''>)


What details could prevent this behavior from happening?!
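To see the force of this linearity argument in code, here is a toy sketch (my own; it uses a CNOT as a stand-in for the detector interaction, which a real many-degrees-of-freedom detector is not):

[code]
import numpy as np

# Toy detector: a CNOT that copies the particle's basis state into the readout
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
blank = up                                   # readout starts in |*> = |0>

# On basis states the detector works perfectly:
#   U (|up> x |*>)   = |up> x |up>
#   U (|down> x |*>) = |down> x |down>
# Linearity then forces the entangled outcome on a superposition:
out = U @ np.kron((up + down) / np.sqrt(2), blank)
expected = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
print(np.allclose(out, expected))            # True
[/code]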
 
  • #5
Hurkyl said:
Why is measurement modeled as a projection, or a "collapse" of a wavefunction? Wouldn't it make more sense to incorporate the measuring device into the quantum state?

Ha, VERY good question.
In fact, this is the very precise question which divides all interpretations of quantum theory. All those thinking that there is some kind of genuine collapse belong to the "Copenhagen" school, and all those who think (I'm one of them) that the measurement device must be included somehow, and that there is no collapse, are proponents of the "relative state interpretation". Of course, there are many variations on the theme. The "relative state" view is better known as the Many Worlds Interpretation or the Everett interpretation.

There is however a link between the two, called decoherence theory. It shows that, once a quantum system interacts with the "environment" (and this happens extremely fast!), then for all practical purposes what you would obtain with a relative state interpretation comes down to simply applying the "collapse of the wavefunction".
Not all problems are solved (I'm even convinced that some are insurmountable, which makes me not a "true MWI"), but those problems are very remote from all thinkable experimental results in the near (say, 200 years :-) future.

So for all practical purposes, *it doesn't matter* whether you "collapse the wavefunction" or not. The reason is that from the moment you have a measurement apparatus "big enough" for you to read it, it has also interacted with the environment, and decoherence theory then shows that you will obtain identical results as if you DIDN'T take into account the quantum nature of the device but "collapsed" the wavefunction of the apparatus.
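A toy numpy illustration of this point (my own sketch, not Patrick's argument): once the apparatus result has also been copied into orthogonal environment states, tracing the environment out leaves a density matrix with no interference terms, i.e. a classical mixture, exactly what the collapse rule would have given.

[code]
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
e_up, e_down = up, down                      # orthogonal environment records

# System already entangled with apparatus, then with the environment:
# (|u,u,e_up> + |d,d,e_down>)/sqrt(2)
psi = (np.kron(np.kron(up, up), e_up) +
       np.kron(np.kron(down, down), e_down)) / np.sqrt(2)

rho = np.outer(psi, psi.conj())              # full 8x8 density matrix

# partial trace over the 2-dimensional environment
rho_sys = np.einsum('ikjk->ij', rho.reshape(4, 2, 4, 2))

print(np.round(rho_sys, 3))
# diag(0.5, 0, 0, 0.5): a mixture of "up,up" and "down,down",
# with the off-diagonal (interference) terms gone
[/code]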

cheers,
Patrick.
 
  • #6
vanesch said:
Ha, VERY good question.
[...]
Not all problems are solved (I'm even convinced that some are insurmountable, which makes me not a "true MWI"), but those problems are very remote from all thinkable experimental results in the near (say, 200 years :-) future.

Decoherence solves all the problems I can see. So don't leave me hanging. What problems does it not solve?

I looked at your journal and noticed you claim a problem with MWI when the probabilities are not equal to 0.5. But I'm not a physicist and have problems following why. Could you make that clearer?
 
  • #7
It doesn't solve all problems. While it explains why, when we make a measurement, the result corresponds to one of the eigenvalues of the measurement operator rather than to all or several of the eigenvalues, what it doesn't explain is why it corresponds to a particular eigenvalue. That is, the measurement apparatus-particle system (of course you can consider an even larger environment, but the problem's still there) is still in a superposition of states corresponding to all the possible measurement results; it's just that all the states corresponding to measurements that show the particle in a superposition of states vanish. This is why people who love the many worlds interpretation love decoherence: in the context of the MWI it's not troubling at all that the wavefunction of the system as a whole hasn't collapsed.
 
  • #8
ppnl2 said:
Decoherence solves all the problems I can see. So don't leave me hanging. What problems does it not solve?

The problem that decoherence solves is essentially the "preferred basis" problem. A problem posed by the "collapse" is: IN WHAT BASIS do we "collapse"? For instance, suppose we "measure position": we make a device that will entangle with the |x> states of the particle. But then you can easily find out that superpositions of the device's states entangle with the |p> states of the particle. So why did I call that device a "position measurement device"? Why didn't I call it a "momentum measurement device" and consider its other states? You can easily convince yourself, with a toy example of 3 or 4 states, that you can write it in any way you want.

This is where decoherence comes in. Indeed, the interactions with the environment are such that the "pointer states" (the states of a measurement device which are close to our usual "classical states") have a preferred meaning: they form the basis that appears in the Schmidt decomposition of the system(+apparatus) and the environment. THIS is the essential content of "decoherence". It means that in the "wavefunction of the universe" you have a tidy superposition of "rest of the universe states" x "pointer states" x "system states which are eigenfunctions of the measurement".
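For a pure state of two subsystems, the Schmidt decomposition mentioned here is just a singular value decomposition of the coefficient matrix; a minimal numpy sketch (my own illustration):

[code]
import numpy as np

# A bipartite state psi[i, j] on a 2x2 system: here (|00> + |11>)/sqrt(2)
psi = np.array([[1.0, 0.0],
                [0.0, 1.0]]) / np.sqrt(2)

# psi = U diag(s) V^dagger: the columns of U and V are the Schmidt bases
# of the two subsystems, and s holds the Schmidt coefficients
U, s, Vh = np.linalg.svd(psi)
print("Schmidt coefficients:", s)            # [0.707..., 0.707...]
[/code]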

So that problem is essentially solved.

What is NOT solved is why we now have to pick out ONE of these terms, with probability given by the squared magnitude of its complex coefficient.
I know that many texts on decoherence say they do solve it, using the density matrix. However, the definition of the density matrix already USES the Born rule! So that's circular reasoning. Zeh points this out in his book. You cannot use the projection postulate to prove that you obtain the projection postulate.

cheers,
Patrick.
 
  • #9
Hurkyl said:
What bothers me about that explanation is that a detector is specifically designed not to behave "randomly" -- it's supposed to tell you, for example, what the spin of an object is.

And it does this very well: you feed it something in a pure spin up state, and it says "Spin is up!". Similarly you feed it something in a pure spin down state, and it says "Spin is down!"

I think you have fallen into the probability problem of interpreting what is an event and what is an outcome (in other words, the connection between experimental results and probability laws, and how to measure it). Please note that probability theory (Kolmogorov or QM) does not explain how we get an outcome in an experiment, but rather how to connect the "independent" "single" outcomes (the logical assertion "the outcome a of A is true") of an experimental trial (one result) to the probability law of events (the statistics) on a probability space.
In this context, an abstract measurement apparatus just allows one to define the events (i.e. the logical assertion A=a is true <=> not (A=a) is false) and the single outcomes in an experimental trial; it does not say why we have single outcomes or how to build "real" abstract measurement apparatuses. In other words, it is a choice (or a restriction) of description: we use functional statements rather than relational statements when we express experimental results.

In this context, the collapse postulate is only an update of the probability law of all the possible observables (the collapse of the state), given that the event A=a is true (i.e. the system state is |a> when the event A=a is true). It allows one to make logical assertions concerning the probability law of observables (somewhat analogous to a conditional probability law). It does not tell us how to implement the abstract measurement apparatuses (how we get access to the commuting and non-commuting observables).
One of the jobs of physicists is the interpretation of these different theory statements and of how we make real "abstract measurement apparatuses". Vanesch's post is an example of what can be done ; ). However, interpretation by itself does not change the formal content of the theory.


I hope this can help.

Seratend.


jtbell said:
My understanding is that it's because measuring devices are usually complex macroscopic objects with many internal degrees of freedom, and we don't really understand (yet) the details of how such systems interact with simpler systems that exhibit what we think of as quantum-mechanical behavior. It basically sweeps all those details under the rug so that we can make practical calculations.

This is the usual answer of some QM teachers ; ). They do not want to say they do not have an "explanation" (or interpretation), so they try to use infinity as an escape. I prefer to believe in formal results such as decoherence and the strong/weak law of large numbers.

Seratend.
 
  • #10
jcsd said:
what it doesn't explain is why it corresponds to a particular eigenvalue.

And it won't be able to, as long as we accept that QM results deal only with the probability law of events and not with outcomes. Somewhere we need the external assumption "A=a is true" (or: the state is |a>) to define the probability law of a system (even in the case P=100%).

Seratend.
 
  • #11
vanesch said:
What is NOT solved, is why we have now to pick out ONE of these terms, with the probability of its complex coefficient, squared.
I know that many texts on decoherence say they do solve it, using the density matrix. However, the definition of the density matrix already USES the born rule ! So that's circular reasoning. Zeh points this out in his book. You cannot use the projection postulate to prove that you obtain the projection postulate.

cheers,
Patrick.

But why is this anything more than another inappropriate "why?" question? For example, you could have asked Newton "why" [itex]F = Gm_1m_2/r^2[/itex]. And I think Newton did obsess over philosophical issues with action at a distance. But science does not answer "why?" questions very well. It goes for "what?" questions.

So is the unsolved problem purely philosophical?
 
  • #12
ppnl2 said:
But why is this anything more than another inappropriate "why?" question? For example, you could have asked Newton "why" [itex]F = Gm_1m_2/r^2[/itex]. And I think Newton did obsess over philosophical issues with action at a distance. But science does not answer "why?" questions very well.
So is the unsolved problem purely philosophical?

Almost. Certain experiments (delayed choice quantum erasure) have shown that one sometimes has to be careful with using the projection postulate too early. You can *think* that you have done a measurement (when in fact you have simply entangled one system with another), and then you can "erase" that measurement (make it in principle impossible to extract the information), which suddenly allows you to "resurrect" the collapsed wavefunction.
In fact, you should have continued the unitary evolution a bit further, and then all mystery disappears, but these are indeed strange experiments, pulling these questions slightly out of the "purely philosophical" realm.

Another point is that this "collapse of the wavefunction" stuff doesn't respect explicit Lorentz invariance.

In general, the point where you have to apply the collapse (what constitutes a measurement) is in fact completely ill-defined, no matter how you turn quantum theory. In general it gives people who think a lot about it the feeling that something, somewhere, is slightly fishy and maybe contains a hint to the "holy grail" (unification of gravity and quantum theory). This is what was different in Newton's ponderings: there the definitions were crystal-clear. Here we have a very fuzzy zone: what constitutes a measurement? It is simply not defined.

But for the working physicist, everything is "all right", because this problem of definition is SO remote from what is experimentally accessible that it won't matter for a very long time to come.

cheers,
patrick.
 
  • #13
vanesch said:
Almost. Certain experiments (delayed choice quantum erasure) have shown that one sometimes has to be careful with using the projection postulate too early. You can *think* that you have done a measurement (when in fact you have simply entangled one system with another), and then you can "erase" that measurement (make it in principle impossible to extract the information), which suddenly allows you to "resurrect" the collapsed wavefunction.
In fact, you should have continued the unitary evolution a bit further, and then all mystery disappears, but these are indeed strange experiments, pulling these questions slightly out of the "purely philosophical" realm.

I don't see this as a problem. You can reverse the measurement only because the results of the measurement never interacted with the universe at large. You don't get decoherence until the measurement interacts with you and the rest of the universe. You don't get wave collapse until you get decoherence. Or maybe we should say there is no such thing as wave collapse, only decoherence. A measuring device can just be defined as a device for producing rapid decoherence.


Another point is that this "collapse of the wavefunction" stuff doesn't respect explicit Lorentz invariance.

I'm not sure I understand what you mean here. I think I understand that QM allows different observers to have very different views of the world. For example, the cat in the box has a very different take than an observer outside, who must see the cat in superposition. But isn't this what MWI solves? Different observers with irreconcilable views of reality are banished to different branches. Now I'm not a fan of explicit MWI because I'm not sure what sense it makes to claim these other branches "exist" if you can't get there from here. But as a way to wrap your mind around the fact that QM allows no contradictions, it is useful.


In general, the point where you have to apply the collapse (what constitutes a measurement) is in fact completely ill-defined, no matter how you turn quantum theory.

How about if you put collapse at the point where you yourself become entangled with the measurement? It makes for a weird universe but I see no possibility for an actual contradiction.
 
  • #14
ppnl2 said:
I don't see this as a problem. You can reverse the measurement only because the results of the measurement never interacted with the universe at large. You don't get decoherence until the measurement interacts with you and the rest of the universe. You don't get wave collapse until you get decoherence. Or maybe we should say there is no such thing as wave collapse, only decoherence. A measuring device can just be defined as a device for producing rapid decoherence.

Yes, I agree fully.

I'm not sure I understand what you mean here. I think I understand that QM allows different observers to have very different views of the world. For example, the cat in the box has a very different take than an observer outside, who must see the cat in superposition. But isn't this what MWI solves?

Although that's my view too (exactly !), it cannot be said that there is a consensus on it.

Different observers with irreconcilable views of reality are banished to different branches. Now I'm not a fan of explicit MWI because I'm not sure what sense it makes to claim these other branches "exist" if you can't get there from here. But as a way to wrap your mind around the fact that QM allows no contradictions, it is useful.

Now that's EXACTLY the view I've been trying to defend here :-) I can tell you most people don't see it that way.


How about if you put collapse at the point where you yourself become entangled with the measurement? It makes for a weird universe but I see no possibility for an actual contradiction.

Indeed, that's also why I endorse that view. I don't see how you can make any sense out of the current formalism otherwise. But many people don't accept it. Note that what you (and I) describe here is, if you think about it, a kind of solipsist universe, in that quantum theory tells you only what YOU, as a single individual, are going to observe, either because you "choose a branch" to live in, or because you are the only real observer around and you collapse the entire wavefunction of the universe.

cheers,
Patrick.
 
  • #15
This strongly reminds me of concepts in mathematics called "internal" and "external". The best way to describe it is with an example:

There's a theorem called Skolem's Paradox that says there is a countable model of the real numbers. The paradox is, of course, that we know the set of reals is uncountable, so how can there possibly be a model of the reals that is countable?

The resolution works roughly as follows: there are two mathematical "worlds" at play here. One is the theory of the real numbers, and the other is set theory (other things would work here too). In the set theory world, we can build a model of the reals -- a collection of objects that, with a suitable interpretation, obey the rules of the theory of real numbers.

(This is roughly similar to building a model of a physical theory too!)

So what happens is that Skolem's Paradox is an external statement -- it's a statement made in the world of set theory, and it says that there is a model in which we can find a (set theoretic!) bijection between the model's naturals and the model's reals.

Cantor's theorem, however, is an internal statement -- within the world of real numbers, it is impossible for there to exist a bijection between the natural numbers and the real numbers.

So the resolution is that the bijection of Skolem's paradox is an external bijection -- it's not part of the world of real numbers, and thus we don't have a contradiction.


I'm reminded of this because that's what I perceive is going on here with measurement: people seem to be formulating their questions externally. We have this model of the universe, and we're sitting in our analytical world asking questions about the things the model spits out.

I conjecture that interpretational issues go away, or are at least greatly simplified, if you instead formulate your questions internally -- try to ask the questions inside the quantum mechanical model.


I think the reason I homed in on the particular Quantum Computational example I did originally is precisely because, to me, it seemed to tidy up the Many Worlds Interpretation. Forking reality is clearly an external viewpoint. :smile: And from this external viewpoint, we do have that different observers have irreconcilable views of reality, but they can never talk to each other...

But, like Skolem's paradox, the nature changes drastically when you ask your question internally -- when two observers walk up to each other and compare notes, they will always find that they agree with each other.



I was trying to work out the overall picture for a Bell test done in my interpretation, and it yields a different interpretation of a collapsing wavefunction. And, if this approach works, it happens to solve another thing that used to irritate me -- the physical meaning of probability.

The usual meaning of a probability, as I understand it, is that the probability of an outcome to an experiment is the limit of a particular observed quantity as the number of repetitions of the experiment goes to infinity. However, it's always felt mildly circular to me -- classically, if you repeated the experiment exactly, you get the same answer. The probabilities are supposed to relate to the uncertainties we had about the setup... but then we would have already had to know about the probability distribution on those uncertainties! et cetera. And, of course, the central limit theorem makes it virtually impossible to detect any mistakes in this setup.

I propose a different meaning of probability in QM -- the probability of an outcome is defined to be the square of its coefficient in the state vector (assuming an orthogonal decomposition).

In this interpretation, "wavefunction collapse" and "conditional probability" appear to be roughly synonymous -- in other words, the "right" point to collapse the wavefunction is at the conditional clause in whatever conditional probability you wanted to compute.
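A toy numpy check of that reading (my own sketch, for a pair of commuting observables): projecting onto the conditioning outcome and renormalizing, i.e. "collapsing", reproduces exactly the ordinary conditional probabilities.

[code]
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
psi /= np.linalg.norm(psi)                   # generic two-qubit state, psi[a, b]

a = 0                                        # suppose the event "A = 0" is true

# (1) ordinary conditional probability from the joint Born probabilities
joint = np.abs(psi) ** 2
p_b_given_a = joint[a] / joint[a].sum()

# (2) "collapse": project onto A = a, renormalize, then apply the Born rule
collapsed = psi[a] / np.linalg.norm(psi[a])
p_b_collapsed = np.abs(collapsed) ** 2

print(np.allclose(p_b_given_a, p_b_collapsed))   # True
[/code]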
 
  • #16
Hurkyl said:
I propose a different meaning of probability in QM -- the probability of an outcome is defined to be the square of its coefficient in the state vector (assuming an orthogonal decomposition).

In this interpretation, "wavefunction collapse" and "conditional probability" appear to be roughly synonymous -- in other words, the "right" point to collapse the wavefunction is at the conditional clause in whatever conditional probability you wanted to compute.

I think our views are very close, as far as I think I understand what you mean.

cheers,
Patrick.
 
  • #17
Oh, this is cool, things are making sense.

(I apologize in advance because I expect I'm posting obvious stuff... I'm just giddy! :rofl:)


Going back to the observable effect of a measurement collapsing a wavefunction... if we prepare a particle with spin up along the z axis, then measure it along the x axis, then measure it along the z axis again, because of the collapse we observe spin up and spin down along the z axis with equal likelihood.

However, if we don't look at it in terms of collapse...

If we start with the state |z+>... (suppressing the state of the detectors because it's irrelevant right now)

Then we detect it along the x-axis. Letting the second component be the detector's readout, detection causes the transformation:

|z+> = (|x+> + |x->)/√2
becomes
(|x+, +> + |x-, ->)/√2

Now, we detect along the z-axis, so:

(|x+, +> + |x-, ->)/√2
= (|z+, +> + |z-, +> + |z+, -> - |z-, ->)/2
becomes
(|z+, +, +> + |z-, +, -> + |z+, -, +> - |z-, -, ->)/2

Or, just looking at the detector states:

(|++> + |+-> + |-+> - |-->) / 2


Of course, if we did the detection the other way around, we get:

(|++> + |+->) / √2

(Where the first is the detection along the z-axis, and the second along the x-axis)


So, this tells us what we already know -- detection isn't commutative. But now it means something to me. :smile: And the explanation of why the first experiment can detect spin down along the z-axis doesn't resort to wavefunction collapse or the HUP or anything.
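For anyone who wants to check the bookkeeping, here is a numpy sketch of the two experiments (my own illustration; the detect helper, which records the spin along a chosen axis into a fresh readout qubit without any collapse, is hypothetical):

[code]
import numpy as np

zp, zm = np.array([1.0, 0.0]), np.array([0.0, 1.0])
xp, xm = (zp + zm) / np.sqrt(2), (zp - zm) / np.sqrt(2)

def detect(psi, plus, minus):
    """Append a readout qubit recording the spin along one axis:
    psi -> (P+ psi) (x) |0> + (P- psi) (x) |1>.
    This is an isometry (a unitary acting on a fresh |0> ancilla); nothing collapses."""
    n_rest = len(psi) // 2                   # dimensions beyond the spin itself
    Pp = np.kron(np.outer(plus, plus.conj()), np.eye(n_rest))
    Pm = np.kron(np.outer(minus, minus.conj()), np.eye(n_rest))
    return np.kron(Pp @ psi, zp) + np.kron(Pm @ psi, zm)

psi0 = zp                                    # start in |z+>

# x-detection, then z-detection: reshape axes are (spin, x-readout, z-readout)
psi = detect(detect(psi0, xp, xm), zp, zm)
print((np.abs(psi) ** 2).reshape(2, 2, 2).sum(axis=0))
# [[0.25 0.25], [0.25 0.25]] -- all four readout pairs equally likely

# z-detection, then x-detection: reshape axes are (spin, z-readout, x-readout)
psi = detect(detect(psi0, zp, zm), xp, xm)
print((np.abs(psi) ** 2).reshape(2, 2, 2).sum(axis=0))
# [[0.5 0.5], [0.0 0.0]] -- z always +, x is 50/50: the order matters
[/code]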
 
  • #18
Hurkyl said:
And the explanation of why the first experiment can detect spin down along the z-axis doesn't resort to wavefunction collapse or the HUP or anything.

INDEED. And when you analyse several experiments this way, such as EPR or "delayed quantum erasure" experiments, I find that this makes things MUCH clearer. The "problem" is that you sometimes have to put Bob or Alice in an entangled state, which people find "strange". But in general, working this way takes away a lot of confusion. That's at least my feeling about it.

cheers,
Patrick.
 
  • #19
vanesch said:
INDEED. And when you analyse several experiments this way, such as EPR or "delayed quantum erasure" experiments, I find that this makes things MUCH clearer. The "problem" is that you sometimes have to put Bob or Alice in an entangled state, which people find "strange". But in general, working this way takes away a lot of confusion. That's at least my feeling about it.

cheers,
Patrick.

I feel the same way. People sometimes exert great effort to bury their heads in the sand so that they can pretend that something that they know happens, but that makes them feel strange, doesn't happen. That's only human nature. (And me being human, I know that I am susceptible to doing it too!)

In this case I'm talking about the fact that from Alice's point of view, Bob MUST be in a superposition, at least for the amount of time it takes light to travel from Bob to Alice.

David
 
  • #20
Hurkyl said:
The usual meaning of a probability, as I understand it, is that the probability of an outcome to an experiment is the limit of a particular observed quantity as the number of repetitions of the experiment goes to infinity. However, it's always felt mildly circular to me -- classically, if you repeated the experiment exactly, you get the same answer.

The problem with the frequentist interpretation of probabilities is in its domain of validity: an experimental realisation may belong to a set of null probability. And a set of null probability may be dense in the set of all the results. So what matters most in deciding the occurrence of such an outcome: the measure of the set, or the distance to other points of the set?

Therefore, in the absolute, it is difficult to say whether an experimental trial gives the "real" frequency sequence of the probability law or something else. In the end, it is always a choice: we formally attach a probability law to an experimental trial (by computing the frequency of the outcome sequence).
 
  • #21
seratend said:
The problem with the frequentist interpretation of probabilities is in its domain of validity: an experimental realisation may belong to a set of null probability.

This is only the case in (over)idealized experiments, where we measure "continuous" variables. But we never do that! The outcome of an experiment is ALWAYS one of a finite number of discrete possibilities, and then an experimental realisation cannot belong to a set of null probability.
Try to give me ONE example where we have done a measurement in such a way that we didn't have a finite number of possible outcomes...

cheers,
Patrick.
 
  • #22
vanesch said:
This is only the case in (over)idealized experiments, where we measure "continuous" variables. But we never do that! The outcome of an experiment is ALWAYS one of a finite number of discrete possibilities, and then an experimental realisation cannot belong to a set of null probability.
Try to give me ONE example where we have done a measurement in such a way that we didn't have a finite number of possible outcomes...

cheers,
Patrick.

No, it is true for all experimental trials (except for the one-valued-outcome experiment <=> P=100%).
Just take the coin toss experiment (2 outcome values). In order to verify the probability law of heads and tails, you make an experimental trial of n independent tosses. This experimental trial is an outcome (a point) of the probability space {H,T}^n.
When n goes to infinity (to calculate the probability law), you obtain an uncountable set. In other words, the infinite sequence of results, needed to compute the frequency, is an outcome in this uncountable set.
If you develop the probability and the sigma algebra on this space, you discover that you have sets of null probability. This is a simple application of the strong law of large numbers when we compute the frequency: the result you get is an almost sure result on a set that has subsets of null probability.
If you prefer: the outcome (H,H,H,H, ...) belongs to this probability space and belongs to a set of null measure for the unbiased coin (P = 50% H / 50% T). In theory, an unbiased coin toss experiment (of infinitely many tosses) may produce this result.

Most people forget that many probability results are "almost sure" results (almost-sure convergence, not convergence in norm). They also forget that the minimum space of experimental trial outcomes needed to compute the frequency (the probability) is uncountable (if we have more than one possible value for the outcomes). In other words, we may compute a wrong law ("almost impossible", but possible).

Seratend.

P.S. Note this result also works for QM (probability of measurement outcomes).
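A quick simulation of the point (my own sketch): the running frequency of a fair coin converges to 1/2 almost surely, yet every particular finite sequence, all-heads included, keeps a nonzero probability.

[code]
import numpy as np

rng = np.random.default_rng(42)
flips = rng.integers(0, 2, size=100_000)     # fair coin: 0 = tails, 1 = heads

# running frequency of heads: converges to 0.5 almost surely (strong LLN)
freq = np.cumsum(flips) / np.arange(1, len(flips) + 1)
print(freq[[9, 99, 9_999, 99_999]])          # drifts toward 0.5

# yet any particular finite sequence has probability 2^-n > 0,
# e.g. ten heads in a row:
print(0.5 ** 10)                             # 0.0009765625
[/code]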
 
  • #23
seratend said:
When n goes to infinity (to calculate the probability law), you just obtain a non-countable set. In other words, the infinite sequence of results, needed to compute the frequency, is an outcome in this non-countable set.

You can't repeat an experiment an infinite number of times!
So that's what I wanted to say: any REAL executable experiment, or any real, executable series of experiments, always draws out of a finite set. The rest is (over)idealisation, which can be a useful mathematical shortcut, but you never, even in principle, can conduct an experiment (considered as a series of experiments) which doesn't reduce to a finite set of possibilities with finite probabilities.
Again, that shouldn't stop us from doing gedanken experiments where we take n to infinity, but one shouldn't worry about problems that appear ONLY in gedanken experiments and which are in principle not executable, no?

cheers,
Patrick.
 
  • #24
vanesch said:
You can't repeat an experiment an infinite number of times!
So that's what I wanted to say: any REAL executable experiment, or any real, executable series of experiments, always draws out of a finite set.
...
Again, that shouldn't stop us from doing gedanken experiments where we take n to infinity, but one shouldn't worry about problems that appear ONLY in gedanken experiments and which are in principle not executable, no?
cheers,
Patrick.

I agree perfectly that we cannot repeat an experiment infinitely (at least not in a finite time). However, the infinite sequences are the only ones that can give an almost surely correct result:
If n is finite, the frequency you compute on the finite-sequence experimental trial follows the binomial law (the probability law on the finite sample space {H,T}^n), assuming the independence of the trials. In this case, the probability of getting a result different from (P = 50% H, 50% T) for the frequency calculation is non-null, and worse than for the infinite sequence (probability 0).
In other words, we have a greater probability of getting the sequence (H,H, ..., H) of n heads in a finite experimental trial than of getting the infinite sequence (H,H,H, ...) (probability 0): finite sequences have more chances of giving false results than infinite sequences.
Therefore, the only chance (in theory) of getting an almost surely correct result is with the infinite sequence, where the probability space is uncountable => we still have sequences that may give false results (an almost sure result).
With probability, we never know if the experimental trial (finite or infinite) is the correct one: we just have a probability value to quantify this result.

Do not forget that the sample space of the "n-sequence experimental trials" (n finite or infinite) contains all the possible sequences: the sequences that converge to the probability law and the sequences that do not converge. The only distinction between these points of the sample space is the probability law (null or non-null). And a null probability does not mean that the event never occurs, just that it is unlikely.

Seratend.
 

1. What is quantum measurement?

Quantum measurement is the process of obtaining information about a quantum system, such as the position or momentum of a particle. It involves interacting with the system in a way that alters its state, allowing us to gather information about its properties.

2. What is the CNOT gate in quantum computing?

The CNOT (Controlled-NOT) gate is a fundamental quantum logic gate that operates on two qubits (quantum bits) and is used to perform logical operations in quantum computing. It flips the second qubit if and only if the first qubit is in the state |1>, otherwise it leaves the second qubit unchanged.

3. What is Bell's Theorem and why is it important?

Bell's Theorem is a fundamental result in quantum physics which states that certain predictions of quantum mechanics cannot be reproduced by any local hidden-variable theory. It demonstrates the existence of non-local correlations between entangled particles, which has important implications for our understanding of reality and the nature of quantum mechanics.

4. How is quantum measurement related to Bell's Theorem?

Quantum measurement plays a crucial role in understanding Bell's Theorem. In order to test the predictions of the theorem, measurements must be performed on entangled particles. These measurements demonstrate the non-local correlations predicted by Bell's Theorem and provide evidence for the strange and counterintuitive nature of quantum mechanics.

5. What are some applications of quantum measurement and Bell's Theorem?

Quantum measurement and Bell's Theorem have important applications in quantum cryptography, quantum teleportation, and quantum computing. They also have implications for our understanding of fundamental physics and the nature of reality. Additionally, the study of quantum measurement and Bell's Theorem allows us to explore and test the limits of our current knowledge and technology in the field of quantum mechanics.
