Exploring von Neumann's View of Quantum Measurement

Nicky
I have a question about quantum entanglement experiments, such as the two-photon "delayed choice" experiment performed by Aspect et al. http://prola.aps.org/abstract/PRL/v49/i25/p1804_1 . Can anyone estimate how much time elapses between the arrival of a single photon at the detector, and the initiation of a current or voltage that could be considered "macroscopic"? I know too little about photomultipliers and the like to have any idea.

In other words, does the experimental setup rule out the possibility that detector states become entangled with detected photon states, so that the "wavefunction collapse" is actually much later than the arrival of the photon at the detector? I am wondering if the time it takes for the detector to settle is long enough that a timelike signal could pass from one detector to another.

Thanks in advance if anyone can shed light on this ... er .. so to speak.
 
Nicky said:
I have a question about quantum entanglement experiments, such as the two-photon "delayed choice" experiment performed by Aspect et al. http://prola.aps.org/abstract/PRL/v49/i25/p1804_1 . Can anyone estimate how much time elapses between the arrival of a single photon at the detector, and the initiation of a current or voltage that could be considered "macroscopic"? I know too little about photomultipliers and the like to have any idea.

Photomultipliers usually have response times in the order of a few nanoseconds...

In other words, does the experimental setup rule out the possibility that detector states become entangled with detected photon states, so that the "wavefunction collapse" is actually much later than the arrival of the photon at the detector?

Ha, that's a nice idea :-))) Especially if the "detector" is the person looking at the results of the correlations ;-)

cheers,
Patrick.
 

vanesch said:
Photomultipliers usually have response times in the order of a few nanoseconds...

Hmm ... how much current does the photomultiplier produce at the end of that response time? Is it just a few electrons per millisecond, or Avogadro's number of them?
 
Depends how many initial photons go in and what the photosensitive bit's made of!
 
James Jackson said:
Depends how many initial photons go in and what the photosensitive bit's made of!

Only one initial photon goes in. Let's say the photosensor is made of silicon; then how many excited electrons are flowing per second after the response time has elapsed?
 
Nicky said:
In other words, does the experimental setup rule out the possibility that detector states become entangled with detected photon states, so that the "wavefunction collapse" is actually much later than the arrival of the photon at the detector? I am wondering if the time it takes for the detector to settle is long enough that a timelike signal could pass from one detector to another.

The Aspect experiment was intended to compensate for this. Over time, many improvements have been made to the process. The current state of the art is much more rigorous. Using fiber optics, distances are much longer and the time varying elements are more sophisticated. Thus the locality issue you describe is ruled out. Please reference:

The 1998 Innsbruck Experiment (EPR with 1 kilometer of separation):
http://arxiv.org/PS_cache/quant-ph/pdf/9810/9810080.pdf
by Weihs, Jennewein, Simon, Weinfurter and Zeilinger
 
Nicky said:
Hmm ... how much current does the photomultiplier produce at the end of that response time? Is it just a few electrons per millisecond, or Avogadro's number of them?

You get a short current pulse which integrates to, say, a few tens of femtocoulombs (around 100,000 electrons). That's good enough to be seen with a charge-sensitive amplifier. The pulse itself takes a few nanoseconds, and during that time, currents of the order of a few microamperes flow from the last anode.

cheers,
Patrick.
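As a back-of-envelope check, those figures hang together: 100,000 electrons over a few nanoseconds gives tens of femtocoulombs and a few microamperes. A quick sketch (illustrative values only, not measured numbers from any particular tube):

```python
# Sanity check on the photomultiplier pulse figures quoted above.
# Illustrative values: 1e5 electrons collected over a ~3 ns pulse.
E_CHARGE = 1.602e-19  # electron charge in coulombs

n_electrons = 1e5
pulse_width = 3e-9    # seconds

charge = n_electrons * E_CHARGE   # total collected charge
current = charge / pulse_width    # mean current during the pulse

print(f"charge  = {charge * 1e15:.1f} fC")  # ~16 fC: "tens of femtocoulombs"
print(f"current = {current * 1e6:.1f} uA")  # ~5 uA: "a few microamperes"
```

So even a single detected photon ends up as a robustly macroscopic current pulse within nanoseconds, which is the timescale relevant to Nicky's original question.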
 
DrChinese said:
The Aspect experiment was intended to compensate for this. Over time, many improvements have been made to the process. The current state of the art is much more rigorous. Using fiber optics, distances are much longer and the time varying elements are more sophisticated. Thus the locality issue you describe is ruled out. Please reference:

The 1998 Innsbruck Experiment (EPR with 1 kilometer of separation):
http://arxiv.org/PS_cache/quant-ph/pdf/9810/9810080.pdf
by Weihs, Jennewein, Simon, Weinfurter and Zeilinger

Thanks for the reference. Yes, that does seem to settle the locality issue.
 
Nicky said:
Thanks for the reference. Yes, that does seem to settle the locality issue.

It solves part of the issue. That is, *if* A and B are causally affecting each other, then these causal influences must be travelling faster than light. We can be pretty sure of that.

What we can't be sure of yet is whether or not A and B are causally affecting each other.

Bell-type analyses show that the current state of the art of descriptive physics is quantitatively inadequate. Quantum theory isn't descriptive physics. So, there's no qualitative understanding of how the correlations are produced. They might be due to local interactions or they might be due to superluminal interactions. Nobody knows.

The question of whether or not nonlocal causality is a fact of nature remains unanswered.
 
  • #10
Sherlock said:
It solves part of the issue. That is, *if* A and B are causally affecting each other, then these causal influences must be travelling faster than light. We can be pretty sure of that.

What we can't be sure of yet is whether or not A and B are causally affecting each other.

Bell-type analyses show that the current state of the art of descriptive physics is quantitatively inadequate. Quantum theory isn't descriptive physics. So, there's no qualitative understanding of how the correlations are produced. They might be due to local interactions or they might be due to superluminal interactions. Nobody knows.

The question of whether or not nonlocal causality is a fact of nature remains unanswered.


This is correct. The problem with these situations is that they are in a "twilight zone": on one hand, the Bell conditions are violated. But on the other hand, there is no way for B to receive any information about the *choice* of polarizer setting at A. If there were such a transfer (meaning that B, purely by looking at his own data, could find out what A's polarizer setting was), then for sure there was a faster-than-light causal influence. But there is no such information transfer (B cannot find out what the polarizer setting at A was), and it can be shown that such a transfer is impossible in quantum theory. You can only find out that there was a peculiar correlation by *bringing together* the data from both sides. And that leaves open the possibility that locality is still valid, depending on your view of quantum theory.

As Sherlock said, the safest attitude is to say that nobody knows if locality holds or not as a fundamental principle.

cheers,
Patrick.
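Both facts Patrick mentions (the Bell violation, and the absence of any usable signal) follow directly from the textbook quantum prediction for polarization-entangled photons in the state (|HH> + |VV>)/sqrt(2), for which P(++) = P(--) = cos²(a−b)/2 and P(+−) = P(−+) = sin²(a−b)/2. A short sketch, using standard CHSH angles:

```python
# Quantum predictions for polarization-entangled photons, state (|HH> + |VV>)/sqrt(2).
# Joint outcome probabilities for polarizer angles a, b (radians):
#   P(++) = P(--) = cos^2(a-b)/2,   P(+-) = P(-+) = sin^2(a-b)/2
import math

def joint(a, b):
    c = math.cos(a - b) ** 2 / 2
    s = math.sin(a - b) ** 2 / 2
    return {(+1, +1): c, (-1, -1): c, (+1, -1): s, (-1, +1): s}

def E(a, b):
    """Correlation coefficient <A*B> for settings a, b."""
    return sum(x * y * p for (x, y), p in joint(a, b).items())

deg = math.pi / 180
a, a2, b, b2 = 0, 45 * deg, 22.5 * deg, 67.5 * deg

# 1) Bell/CHSH: S exceeds the local-realist bound of 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # 2*sqrt(2) ~ 2.828: Bell inequality violated

# 2) No signalling: Alice's marginal P(+) is 1/2 whatever Bob's setting is,
#    so Bob's *choice* of angle is invisible in Alice's local data alone.
for b_setting in (0, 30 * deg, 77 * deg):
    p_plus = sum(p for (x, _), p in joint(a, b_setting).items() if x == +1)
    print(p_plus)  # 0.5 every time (up to float rounding)
```

The correlation only shows up when the two data sets are compared, which requires a classical, slower-than-light channel, exactly as described above.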
 
  • #11
Sherlock said:
It solves part of the issue. That is, *if* A and B are causally
affecting each other, then these causal influences must be
travelling faster than light. ...

Has there been any attempt to measure the speed of such faster-than-light influences? It should at least be possible to find a lower bound for that speed, one would think.
 
  • #12
Nicky said:
Has there been any attempt to measure the speed of such faster-than-light influences? It should at least be possible to find a lower bound for that speed, one would think.

As I read the reference (Weihs et al), Figure 1, the lower bound was about 10c. That is assuming there is a non-local effect. The usual interpretation is that distance is not a factor, it is always "instantaneous".
 
  • #13
Nicky said:
Has there been any attempt to measure the speed of such faster-than-light influences? It should at least be possible to find a lower bound for that speed, one would think.

You can't, really. As I said, the reason is that you only see the correlations, once you've brought together the data from both sides, using classical, slower-than-light communication.

If you mean, can you measure "the speed with which one can do measurements at both sides", then this leaves me wondering what you are talking about. For instance, let us place ourselves in a non-Bell context, with classical correlations. Imagine the usual game: I have a white ball and a red ball, and randomly put one in a grey bag and the other in a green bag. The grey bag is sent to Tokyo, the green bag is sent to London. We decide that at 12:00 GMT, both bags will be opened and looked at. So now we have a "correlation" between both results. What is the speed at which this correlation is established?
In a similar way, what does it mean for the "correlation to propagate between both measurements"? I can do (in a certain reference frame) the measurement at A slightly before, or slightly after, the measurement at B. In another reference frame (using relativity), I can reverse the order in which the measurements occur. In all these cases, the results are the same. So how are you going to attach a "speed" to this "propagation of correlation"?

cheers,
Patrick.
 
  • #14
DrChinese said:
As I read the reference (Weihs et al), Figure 1, the lower bound was about 10c. That is assuming there is a non-local effect.

What exactly was that? I wonder what it can mean...

cheers,
Patrick.
 
  • #15
vanesch said:
You can't, really. As I said, the reason is that you only see the correlations, once you've brought together the data from both sides, using classical, slower-than-light communication.

If you mean, can you measure "the speed with which one can do measurements at both sides", then this leaves me wondering what you are talking about. ... what does it mean for the "correlation to propagate between both measurements"? I can do (in a certain reference frame) the measurement at A slightly before, or slightly after, the measurement at B. In another reference frame (using relativity), I can reverse the order in which the measurements occur. In all these cases, the results are the same. So how are you going to attach a "speed" to this "propagation of correlation"?

In these two-photon experiments, the polarizers and detectors are at rest with respect to one another, or nearly so. That is the reference frame I'm referring to when talking about the speed at which correlation hypothetically propagates. Of course, that speed will be measured differently by observers moving with respect to the apparatus, as you indicated. If this "propagation" point of view is valid, correlations of the EPR type would decrease or disappear as the distance between measurements increases.
 
  • #16
vanesch said:
What exactly was that ? I wonder what it can mean...

cheers,
Patrick.

I think the idea is: IF there were a causal effect that simply transmitted from A to B telling the polarization to comply with... then at what speed does that causal effect travel?

In Bohmian mechanics, which attempts to insert non-locality explicitly, I don't think there is any limit to the speed of propagation of the correlation effects. If there WERE some such effect, we know that it must be able to propagate at 10 times the speed of light or more. That is per my reading of Weihs, since they specify that the Einstein light cone could have been a tenth the actual size and locality would have still been respected.

Again, this is not standard interpretation of what is happening. You could just as easily say the purported causal correlation effect travels backward in time too - what speed is that?
 
  • #17
DrChinese said:
Again, this is not standard interpretation of what is happening. You could just as easily say the purported causal correlation effect travels backward in time too - what speed is that?

Yes, that is what I had in mind! It is sufficient to cut 50 cm off one optical fiber or another, and you CHANGE THE ORDER in which things happen, so how can you reasonably define a speed? Or do they simply take the duration of the two measurements (say, 3 ns) and the distance between the two measurements, and calculate a "speed" from the ratio?

cheers,
Patrick.
 
  • #18
vanesch said:
What exactly was that? I wonder what it can mean...

cheers,
Patrick.

I may be wrong, but I think the idea is to assume there is an FTL signal causing the correlation. Then you look at the timing of the measurements and calculate how fast such a signal would have to be to cause the correlation. It does not change the fact that you still have to look at both ends to see the correlation in the first place. And as far as I know, most physicists don't believe in any such signal anyway.
 
  • #19
DrChinese said:
I think the idea is: IF there were a causal effect that simply transmitted from A to B telling the polarization to comply with... then at what speed does that causal effect travel? ... You could just as easily say the purported causal correlation effect travels backward in time too - what speed is that?

It's true that spacelike-separated events don't have a definite time ordering. However, it may be that EPR-type correlations are only allowed across certain spacelike intervals, and not others. Suppose there exists a reference frame in which all allowed correlations appear to propagate forward in time. In that case, the width of the cone enclosing all the corresponding vectors in Minkowski space can be viewed as the "speed" of the correlation signals.

Of course, that implies the existence of a preferred reference frame, at least for quantum correlation phenomena, which is not the current philosophical fashion.
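The clash with relativity can be made concrete with the ordinary Lorentz transformation (a sketch with illustrative numbers, not tied to any specific experiment): for an influence travelling at u > c in the lab frame, any observer boosted faster than c²/u sees the two detection events in reversed time order.

```python
# For an influence travelling at u > c in the lab frame, a boosted observer
# sees the two detection events swap time order once the boost speed exceeds
# c^2 / u. Standard Lorentz transformation; illustrative numbers, units c = 1.
import math

def dt_boosted(dx, u, w):
    """Time separation of the two events in a frame moving at speed w (c = 1)."""
    dt = dx / u                        # lab-frame time separation
    gamma = 1 / math.sqrt(1 - w ** 2)
    return gamma * (dt - w * dx)       # Lorentz transform of the time interval

u = 10.0   # influence speed: 10c, the order of the Weihs-type bound
dx = 1.0   # spatial separation of the two detections (arbitrary units)

print(dt_boosted(dx, u, 0.05))  # > 0: A before B for slowly moving observers
print(dt_boosted(dx, u, 0.1))   # = 0: simultaneous at w = c^2/u = 0.1c
print(dt_boosted(dx, u, 0.2))   # < 0: order reversed
```

So a finite superluminal speed only has an unambiguous meaning relative to some preferred frame, which is exactly the assumption flagged above.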
 
  • #20
Nicky said:
It's true that spacelike-separated events don't have a definite time ordering. However, it may be that EPR-type correlations are only allowed across certain spacelike intervals, and not others. Suppose there exists a reference frame in which all allowed correlations appear to propagate forward in time. In that case, the width of the cone enclosing all the corresponding vectors in Minkowski space can be viewed as the "speed" of the correlation signals.

Yes, but even without relativistic considerations, I have the following problem with trying to define a speed of propagation of any influence. Imagine an EPR setup in which the two particles are sent over long optical fibers; one end arrives at Alice, and the other at Bob. Now, Bob's fiber is slightly longer, so Alice measures "first" and Bob measures, on average, say 0.5 ns later. So we could then define a "speed" as the distance D between Bob and Alice divided by 0.5 ns. But now Bob shifts his photomultiplier 10 cm (0.3 ns) closer, by removing a piece of optical fiber. So now the speed will be something like D / 0.2 ns. Bob again moves his photomultiplier 10 cm closer: this time, Bob clicks first on average... so the speed is then D / (-0.1 ns)?? No, because now suddenly the influence goes from Bob to Alice... So the speed is D / 0.1 ns, but in the other direction. Given the detection time in a PM (if that's considered the "measurement process", whatever that may mean), some events will be going Alice->Bob, others will be going Bob->Alice, and some will be damn close to equal times. How do you define a speed in this situation?

cheers,
Patrick.
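That argument is easy to make concrete with a toy calculation (all numbers illustrative, including the hypothetical 10 km separation): the naive "speed" D divided by the detection-time difference grows without bound and even flips direction as Bob trims his fiber.

```python
# Toy model of the "speed of the influence" described above (illustrative numbers).
# Alice and Bob are separated by D; Bob's extra fiber delays his detection by dt.
# The naive inferred speed is D / dt -- it diverges and flips direction as Bob
# shortens his fiber.
D = 10_000.0  # lab separation in metres (hypothetical)
C = 3.0e8     # speed of light, m/s

for dt_ns in (0.5, 0.2, 0.0, -0.1):  # Bob's detection lag in ns (negative: Bob first)
    if dt_ns == 0.0:
        print("simultaneous detections -> inferred speed undefined (arbitrarily high)")
    else:
        v = D / (dt_ns * 1e-9)
        direction = "Alice->Bob" if v > 0 else "Bob->Alice"
        print(f"dt = {dt_ns:+.1f} ns: |v| = {abs(v) / C:.0f} c, direction {direction}")
```

Any experimental "speed of the influence" is therefore a statement about the timing spread of the detections, not about something observed to propagate.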
 
  • #21
vanesch said:
Yes, but even without relativistic considerations, I have the following problem with trying to define a speed of propagation of any influence. Imagine an EPR setup in which the two particles are sent over long optical fibers; one end arrives at Alice, and the other at Bob. Now, Bob's fiber is slightly longer, so Alice measures "first" and Bob measures, on average, say 0.5 ns later. So we could then define a "speed" as the distance D between Bob and Alice divided by 0.5 ns. But now Bob shifts his photomultiplier 10 cm (0.3 ns) closer, by removing a piece of optical fiber. So now the speed will be something like D / 0.2 ns. Bob again moves his photomultiplier 10 cm closer: this time, Bob clicks first on average... so the speed is then D / (-0.1 ns)?? No, because now suddenly the influence goes from Bob to Alice... So the speed is D / 0.1 ns, but in the other direction. Given the detection time in a PM (if that's considered the "measurement process", whatever that may mean), some events will be going Alice->Bob, others will be going Bob->Alice, and some will be damn close to equal times. How do you define a speed in this situation?

You have defined the speed already (D/T), though I would think of it as the absolute value |D/T|. Yes, you can change the direction of the supposed "signal propagation" by choosing whether Bob or Alice performs the measurement first; the fact that the Bob->Alice and Alice->Bob results are indistinguishable only shows the symmetry of Bob's and Alice's views of the experiment.

The interesting result would be to find that for a large enough value of |D/T|, the correlation effect disappears, i.e. the Bell Inequality is no longer violated, or perhaps to find that the threshold value of |D/T| is anisotropic with respect to the direction of the Bob-Alice axis. This would tend to support a "superluminal signal" view of wavefunction collapse.

No such effect has been seen yet, but has it been positively ruled out?
 
  • #22
Nicky said:
... This would tend to support a "superluminal signal" view of wavefunction collapse.

No such effect has been seen yet, but has it been positively ruled out?

Technically, no. As noted, we are at 10c (or maybe -10c) and we can expect that to rise over time as distances increase. I don't expect us to ever see anything happen in this regard - i.e. a finite interval.

This does allow for some interesting speculation as to how to describe what is happening. In my opinion, there is something fundamental about the act of observation: if there are no hidden variables, then the photon polarization was not determinate prior to the observation. So what happens at the time of observation to explain the results?

a) We have Vanesch's favorite, Many Worlds. There is branching.
b) Perhaps in some rolled up unseen spatial dimension, the photons are actually not separated by any distance at all. Thus there is no superluminal effect because there is no distance to traverse.
c) Perhaps there is in fact a new superluminal force carrier - "chanceons".
d) Perhaps (my favorite speculation) the observation of one photon causes an effect to propagate to the past at the speed of light, and then change direction so it travels forward in time to the other photon. This would exactly trace a light cone that is consistent with observed results. Of course, I have no idea how or why such behavior occurs sometimes but not others, and this is a completely ad hoc idea. The combo of traveling both forward and backward in time allows arbitrarily large distances to be traversed in arbitrary time intervals. Which is more or less what appears to happen anyway :)
 
  • #23
DrChinese said:
This does allow for some interesting speculation ...

a) We have Vanesch's favorite, Many Worlds. There is branching.

From what little I know about the EPR problem, I would tend to agree with Vanesch that Many Worlds is correct, but only if no evidence of superluminal propagation can be found.

b) Perhaps in some rolled up unseen spatial dimension, the photons are actually not separated by any distance at all. Thus there is no superluminal effect because there is no distance to traverse.

I would think (b) implies a superluminal effect after all. It's the effective speed of the signal in the lab frame that matters, not the speed in the particle's own frame.

c) Perhaps there is in fact a new superluminal force carrier - "chanceons".

This is my favorite explanation for EPR, except that the "chanceons" wouldn't carry any energy -- they can only select between energetically degenerate states. Otherwise you'd get superluminal transmission of energy and/or information, leading to causal paradoxes.

d) Perhaps (my favorite speculation) the observation of one photon causes an effect to propagate to the past at the speed of light, and then change direction so it travels forward in time to the other photon. ...

Interesting idea ... it does have some attributes of "Many Worlds" though, since the past which receives the backward-propagated effect isn't the same past that you started out with. Would it be "Many Histories"?
 
  • #24
Nicky said:
Has there been any attempt to measure the speed of such faster-than-light influences? It should at least be possible to find a lower bound for that speed, one would think.

There is a group headed by Gisin who have performed these types of experiments, and I have one reference [1] (4 years old now) that sets a lower bound at 2/3 * 10^7 c (!). Basically they did an EPR type experiment with entangled photons sent via optical fiber network to two villages near Geneva, with the source smack dab in the center. One detector is set spinning at some high angular velocity so that the frames of reference of each detector are not the same, and such that *each* detector, in its own frame, is the first to do the measurement! Pretty cool, eh?

I agree with some of the other comments that there is no actual "speed" because there is nothing actually being transferred from A to B. And I agree with vanesch that the MWI is conceptually the simplest way to understand what's going on here.

David


PS Patrick - wanted to make sure you saw my post on the Born and MWI thread ;)


[1] Zbinden et al. Experimental test of non-local quantum correlation in relativistic configurations. quant-ph/0007009
See also Gisin, Scarani, Tittel, and Zbinden quant-ph/0009055
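A bound of that size is consistent with a simple D/δt estimate (a sketch with illustrative round numbers chosen to reproduce the quoted figure, not the paper's actual data or analysis): with roughly 10 km between the two villages and detection events aligned to within a few picoseconds, any hypothetical influence would need an enormous speed.

```python
# Order-of-magnitude check of the Zbinden/Gisin-type lower bound.
# Illustrative numbers picked to land near the quoted ~2/3 * 10^7 c,
# not taken from the paper itself.
C = 3.0e8     # speed of light, m/s
D = 10.0e3    # ~10 km separation between the two sites (assumed)
dt = 5.0e-12  # assumed bound on the detection-time asymmetry (~5 ps)

v_min = D / dt        # minimum speed any causal influence would need
print(v_min / C)      # ~6.7e6, i.e. roughly 2/3 * 10^7 c
```

The punchline is that tightening the timing symmetry between the two detections pushes the bound up directly, which is what the Geneva experiments exploited.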
 
  • #25
Nicky said:
You have defined the speed already (D/T), though I would think of it as the absolute value |D/T|. Yes, you can change the direction of the supposed "signal propagation" by choosing whether Bob or Alice performs the measurement first; the fact that the Bob->Alice and Alice->Bob results are indistinguishable only shows the symmetry of Bob's and Alice's views of the experiment.

Yes, I understand that. What I wanted to say is that certain events in the sample will be almost perfectly synchronized, making the speed arbitrarily high.

The interesting result would be to find that for a large enough value of |D/T|, the correlation effect disappears, i.e. the Bell Inequality is no longer violated, or perhaps to find that the threshold value of |D/T| is anisotropic with respect to the direction of the Bob-Alice axis. This would tend to support a "superluminal signal" view of wavefunction collapse.

I have difficulties with that idea, in that brick walls, rivers, tunnels etc... don't seem to do anything.

No such effect has been seen yet, but has it been positively ruled out?

My personal theory is that beyond a spacelike separation of 70802 lightyears, the effect suddenly disappears :-p But I can't get funding for an experimental verification... :redface:

cheers,
Patrick.
 
  • #26
vanesch said:
Yes, I understand that. What I wanted to say is that certain events in the sample will be almost perfectly synchronized, making the speed arbitrarily high.

Arbitrarily high speed does not contradict the superluminal hypothesis, unless it is also coupled with arbitrariness of direction. The suggestion has been made that all EPR-type signals are instantaneous relative to a single, preferred frame of reference (see quant-ph/0110160). If the laboratory is at rest relative to the preferred frame, or nearly so, measured speeds would be arbitrarily high.

I have difficulties with that idea [disappearance of EPR-like effects for some spacelike intervals], in that brick walls, rivers, tunnels etc... don't seem to do anything.

I don't understand the connection with macroscopic objects. What do you mean?

My personal theory is that beyond a spacelike separation of 70802 lightyears, the effect suddenly disappears :-p But I can't get funding for an experimental verification... :redface:

Hey, if you are patient enough to wait 70802 years for the results to reach you, you deserve funding! :redface:
 
  • #27
Nicky said:
I don't understand the connection with macroscopic objects. What do you mean?

I meant that no attenuation is seen in the effect, whether it is on the same optical table in the lab, or through fibers going to the other side of the town, crossing rivers, etc...
If something had to propagate, you'd think it would matter somehow whether that something has to go through a lot of stuff or not. You can argue that neutrinos wouldn't be hampered either, but: 1) you'd have a 1/r^2 effect if the chanceons are emitted in a sphere, and 2) don't forget that the chanceons have to interact with a photon to change its polarization!
So whatever it is that "propagates", it is going to be something real weird. A bit like the strange mechanical aether in which EM waves had to vibrate in the 19th century.

cheers,
Patrick.
 
  • #28
straycat said:
One detector is set spinning at some high angular velocity so that the frames of reference of each detector are not the same, and such that *each* detector, in its own frame, is the first to do the measurement! Pretty cool, eh?

Yup! Didn't know about that one!

PS Patrick - wanted to make sure you saw my post on the Born and MWI thread ;)

Oops, must have overlooked it! I'll have a look...

cheers,
Patrick.
 
  • #29
vanesch said:
I meant that no attenuation is seen in the effect, whether it is on the same optical table in the lab, or through fibers going to the other side of the town, crossing rivers, etc...
If something had to propagate, you'd think it would matter somehow whether that something has to go through a lot of stuff or not. You can argue that neutrinos wouldn't be hampered either, but: 1) you'd have a 1/r^2 effect if the chanceons are emitted in a sphere, and 2) don't forget that the chanceons have to interact with a photon to change its polarization!
So whatever it is that "propagates", it is going to be something real weird. A bit like the strange mechanical aether in which EM waves had to vibrate in the 19th century.

cheers,
Patrick.

I agree... IF there is some superluminal effect being broadcast... how does it know to show up only alongside the two entangled photons and nothing else? And why is it not blocked by intervening objects?

Or maybe there isn't anything like this... :)
 
  • #30
vanesch said:
I meant that no attenuation is seen in the effect, whether it is on the same optical table in the lab, or through fibers going to the other side of the town, crossing rivers etc...
If something had to propagate, you'd think that it matters somehow if that something is going to go through a lot of stuff or not.

Whatever it is that propagates faster-than-light (if anything does) must have zero scattering cross section with any state of any particle, except the one state it is meant to cancel. Otherwise there would be superluminal energy transfer with the particle from which it scatters, which is causally forbidden.

You can argue that neutrinos wouldn't be hampered either, but: 1) you'd have a 1/r^2 effect if the chanceons are emitted in a sphere, and 2) don't forget that the chanceons have to interact with a photon to change its polarization!

It can't be neutrinos, since they carry energy and lepton quantum numbers with them, and hence classical information.

If there is a "chanceon", imagine its wavefunction. Presumably the chanceon sees very low potential near the photon detection events, and very high potential everywhere else, so the wavefunction is two small, dense "dots" with very tiny magnitude everywhere else. It's more like it's tunneling from one detector to the other, rather than propagating through space like a wave.

So whatever it is that "propagates", it is going to be something real weird. A bit like the strange mechanical aether in which EM waves had to vibrate in the 19th century.

Well we already know nature is weird ... it's just a question of what flavor of weirdness is out there. Hopefully there will be many more experiments that help to answer the question.
 
  • #31
Nicky said:
Well we already know nature is weird ... it's just a question of what flavor of weirdness is out there. Hopefully there will be many more experiments that help to answer the question.

Assuming that nature is weird seems ill-advised, imo. The most likely scenario leading to a physical explanation of the correlations will, I think, come from more detailed experimental analyses and descriptions of the behavior of the emitted light associated with photon detections.

If A and B are communicating in some way, then it would be via disturbances in some medium more fundamental than the electromagnetic medium.

But there's nothing so far known that would indicate that such a medium exists.

Also, the speed of light is closely linked to the rate of the expansion of the universe. The expansion drives everything. It would be very weird indeed if disturbances were propagating many orders of magnitude faster than the rate of the expansion.

Keep in mind that the existence of superluminal signalling is suggested simply because there isn't a comprehensive local explanation (in terms of the behavior of the emitted light associated with photon detection) for the correlations.
 
  • #32
Sherlock said:
Also, the speed of light is closely linked to the rate of the expansion of the universe. The expansion drives everything. It would be very weird indeed if disturbances were propagating many orders of magnitude faster than the rate of the expansion.

If I understand what I think you mean... :smile:

Formerly, the expansion of the universe was generally believed to be at c or less and the basic concept was that we live in a "flat" universe. Is this the relationship you are referring to?

If so... there is convincing (possibly overwhelming) evidence that the expansion of the universe today is much faster than c. There are conflicting values for this expansion, but 3c would probably be a lower limit. Not that this is the place to discuss this... :redface:
 
  • #33
DrChinese said:
Formerly, the expansion of the universe was generally believed to be at c or less and the basic concept was that we live in a "flat" universe. Is this the relationship you are referring to?

If so... there is convincing (possibly overwhelming) evidence that the expansion of the universe today is much faster than c. There are conflicting values for this expansion, but 3c would probably be a lower limit. Not that this is the place to discuss this... :redface:

It's connected, more or less, to what's being discussed. :-) But, I won't belabor it -- not being up on the latest stuff -- except to say that the prevailing notion that the universe is flat and expanding on very large scales at a few times c and locally at around c seems to me to be important when considering hypothetical superluminal signals between A and B.

The idea is that the rate of the expansion sets the speed limit for the propagation of disturbances -- since the energy of the expansion is why there are any disturbances propagating in the first place.
 
  • #34
Nicky said:
If there is a "chanceon", imagine its wavefunction. Presumably the chanceon sees very low potential near the photon detection events, and very high potential everywhere else, so the wavefunction is two small, dense "dots" with very tiny magnitude everywhere else. It's more like it's tunneling from one detector to the other, rather than propagating through space like a wave.

And to explain their correlations then, we introduce still faster luckyons? :biggrin:

Let us not forget that we're talking about an effect which is perfectly well described by current quantum theory, so it is a bit strange that we should be introducing concepts which are not present in current quantum theory to explain them... That's BTW why I'm an advocate of an MWI view on these things (until we have some other theory): it works with what we have on our hands right now. Nothing stops us from speculating about theories that will replace quantum theory, but first of all, we don't have them, and second, there's no point in trying to find theories that explain only EPR. EPR is a particularly spectacular example of the measurement problem, but it is present in about all applications of quantum theory. It just limits the kind of solutions to it.

cheers,
Patrick.
 
  • #35
vanesch said:
Let us not forget that we're talking about an effect which is perfectly well described by current quantum theory, so it is a bit strange that we should be introducing concepts which are not present in current quantum theory to explain them...

Isn't this (a manifestation of) the (measurement)
problem -- that the details of what produces the
correlations aren't *well described* -- that the
physical reason(s) for why the projection works
aren't articulated in qm?

Quantum theory presents virtually no
conceptual picture of physical reality. We're
just relating data according to the rules of
a (more or less) consistent mathematical structure.
So ... there is speculation. Superluminal signals
or more complicated emission waveforms are
introduced to account for the correlations.

I'm particularly fond of your luckyon. And, at
this time would like to introduce the subdivisions,
happygo and notso (depending on whether you're
doing the experiment in Fort Lauderdale or Newark).

And, by the way, thanks for the comments and
interesting discussions (and references) in the Measurement
Problem thread.
 
Last edited:
  • #36
vanesch said:
And to explain their correlations then, we introduce still faster luckyons ? :biggrin:

Let us not forget that we're talking about an effect which is perfectly well described by current quantum theory, so it is a bit strange that we should be introducing concepts which are not present in current quantum theory to explain them... That's BTW why I'm an advocate of an MWI view on these things (until we have some other theory): it works with what we have on our hands right now. Nothing stops us from speculating about theories that will replace quantum theory, but first of all, we don't have them, and second, there's no point in trying to find theories that explain only EPR. EPR is a particularly spectacular example of the measurement problem, but it is present in about all applications of quantum theory. It just limits the kind of solutions to it.

cheers,
Patrick.

I thought chanceons were faster than luckyons. :-p

I totally agree with you about EPR and its example as a measurement problem. It definitely lets you see how difficult it is to construct alternative theories that explain the effect. It is clear that chanceons, luckyons, etc. all have severe problems. After all, where are they, and why don't they show up anywhere else?

These are what I call "ad hoc" hypotheses... and for my money, the cure is worse than the disease. In trying to restore a "physical" mechanism to QM, you introduce new components to QM that add absolutely no new predictive power (as you pointed out). Yet, one also needs to add even more elements just to explain why the new physical mechanism is hidden.
 
  • #37
Sherlock said:
Isn't this (a manifestation of) the (measurement)
problem -- that the details of what produces the
correlations aren't *well described* -- that the
physical reason(s) for why the projection works
aren't articulated in qm?

Of course. But in Newtonian theory, the "details of what produces the force of gravity" aren't well described either ; the difference is only that Newtonian gravity gives us a much cleaner mental picture than QM (of which I maintain that in its unmodified, standard version, you cannot escape an MWI vision in one way or another). It might be that QM is not a universally valid theory but will be replaced by something else (and will be a limiting case of that something else). In that case, all interpretational problems of QM are shifted to whatever that new theory does, and it might give us a much cleaner view. But I would like to stress that just because we haven't got a clean view of QM (apart from MWI and some consciousness-related items, which, I concede, are not particularly clean), it does not necessarily follow that we need another theory.
The analogy with Newtonian theory is there: ok, there IS indeed a more general theory of gravity than Newtonian theory, but one can hardly say that it was needed mainly because "we didn't know the details of what produces the force of gravity". The reason for introducing relativity was an incompatibility of fundamental principles between non-relativistic classical physics and electromagnetism (afterwards confirmed by experimental results) ; GR then extended this reconciliation to gravitation. However, within general relativity, all metaphysical pondering about "the details of what produces the force of gravity" (angels pushing on planets ?) simply became irrelevant. Nevertheless, GR has its own metaphysical problems :-)

In the same way, the fact that "we don't know the physical reasons why the projection works" in QM does not _necessarily_ mean that we need a more general theory. Only, if ever we have such a more general theory, probably all our metaphysical pondering about the measurement problem in QM will become irrelevant ... just to be replaced by other metaphysical issues in the new theory.

But maybe we will NOT have such a more general theory for centuries to come. I don't want to take one or other side as long as there is a much more urgent problem to solve: GR vs QM ! It might, or it might not, have something to do with the issue. Nevertheless, the apparent incompatibility between the superposition principle (QM) and general covariance (GR) gives us a feeling of deja-vu...

cheers,
Patrick.
 
  • #38
Patrick, is it correct to say that QM is unitary evolution plus Born, and does not include projection at all? And that projection is just conjectured because we can't really observe the output of Unitarity-Born, a probability? MWI is then a way to have probability with single datum observation, by splitting the cases into separate "observational sectors".
 
  • #39
vanesch said:
... Let us not forget that we're talking about an effect which is perfectly well described by current quantum theory, so it is a bit strange that we should be introducing concepts which are not present in current quantum theory to explain them...

I thought that was just the point Einstein et al. were trying to make in identifying the EPR "paradox". ... that QM is incomplete and needs extension or replacement with a new theory to adequately explain the results.
 
  • #40
Nicky said:
I thought that was just the point Einstein et al. were trying to make in identifying the EPR "paradox". ... that QM is incomplete and needs extension or replacement with a new theory to adequately explain the results.

No, it is not - that was a different issue, now settled.

EPR presented (at least for the time, and in their opinion) an argument that local hidden variables existed. The alternative, they argued, was that there was NOT "simultaneous reality to non-commuting operators". Their case was in fact convincing - if you follow their actual and literal logic and grant the "extra" assumption they add by hand in the final paragraph.

But Bell showed that this final assumption, which seemed so reasonable, was flawed. I.e., assume there WAS "simultaneous reality to non-commuting operators": this leads to a contradiction with the predictions of QM, and with subsequent experiments.

So to summarize: EPR believed local hidden variables accounted for missing elements of theory, but that was later demonstrated not to be the case. As Vanesch points out, there is nothing technically inconsistent or missing from QM other than it seems unsatisfying in certain respects. That does not imply in any way that QM won't be enhanced or improved in the future, so please don't draw that conclusion. The question is really: is QM self-consistent over its domain of applicability?

In many ways, the focus on whether QM is "complete" was unfortunate. In a lot of ways, this really comes back to semantics. People would have been looking for local hidden variables no matter what they thought of QM anyway. Eventually, Bell's Theorem would have been discovered and then the debate would have shifted to why/how there are no local hidden variables (the state today). That is still an interesting subject, more so to me anyway than debating whether or not QM is complete.
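For readers who want to see the numbers behind Bell's result: any local hidden variable model obeys the CHSH bound |S| <= 2, while QM for the spin-1/2 singlet (whose correlation is the standard -cos(a-b)) predicts up to 2*sqrt(2). A minimal sketch, with the analyzer settings chosen for the maximal violation (the particular angles here are just the textbook choice, not anything from this thread):

```python
import math

def E(a, b):
    # QM prediction for the spin-singlet correlation <A(a)B(b)> = -cos(a - b)
    return -math.cos(a - b)

# Alice's two settings and Bob's two settings (radians), chosen for max violation
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; local hidden variables require |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ≈ 2.828 = 2*sqrt(2), beyond the local-realist bound of 2
```

Aspect-type experiments measure exactly such an S and find it above 2, which is why the "final assumption" cannot be saved.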
 
  • #41
selfAdjoint said:
Patrick, is it correct to say that QM is unitary evolution plus Born, and does not include projection at all?

I wouldn't say that; it depends on one's viewpoint. QM is for sure unitary evolution. Some think (hardline MWIers) that that is sufficient ; I think that in one way or another, the Born rule is needed in order to calculate probabilities. To the entity for which these probabilities make sense (the "observer"), the projection ALSO makes sense. Indeed, if the global wavefunction is:
|psi> = a |O1> |sys1> |env1> + b |O2> |sys2> |env2> + c |O3> |sys3> |env3>
in a strict unitary view, then in one way or another, observer O can be in states O1, O2 or O3, and (that's the hocus-pocus that gives problems) is only aware of ONE of these states, with probabilities |a|^2, |b|^2 and |c|^2 respectively.
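The Born-rule bookkeeping here is elementary but worth making concrete. A tiny sketch with hypothetical amplitudes (the values 0.6, 0.8i, 0 are made up for illustration; any complex a, b, c work):

```python
# Hypothetical branch amplitudes for |O1>, |O2>, |O3>
a, b, c = 0.6, 0.8j, 0.0

# If the state is normalized, norm == 1 and the probabilities are just |amp|^2
norm = abs(a)**2 + abs(b)**2 + abs(c)**2
probs = [abs(x)**2 / norm for x in (a, b, c)]
print(probs)  # ≈ [0.36, 0.64, 0.0]
```

Note that the phases of a, b, c drop out entirely: only the moduli matter for which branch the observer finds himself aware of.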

In a von Neumann view, the above state is the result of the "pre-measurement interaction" and the "measurement" gives then rise to a projection: in the case O is in state O2, (which could happen with probability |b|^2), the STATE is now just |O2> |sys2> |env2>. Projection "ontologically" happened.

In an MWI view, the state |psi> remains what it is, but you happen to be observer number 702340 which is associated with the state O2.
However, FOR THAT PARTICULAR OBSERVER, everything happens AS IF the state is now |O2> |sys2> |env2> (on the condition that env2 remains for ever essentially orthogonal to all other |envx> states). So in this case, the projection doesn't really ontologically happen, but it does happen for all practical purposes FROM THE STANDPOINT OF OBSERVER O2, in that this is the only part of the wavefunction he will still have to care about ; he can, if he likes, continue to calculate what happens to his twin observers in other branches but it won't make a difference to him.
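The condition that "env2 remains forever essentially orthogonal to all other |envx> states" is exactly what decoherence delivers, and it is easy to see why it makes projection work for all practical purposes. A toy two-branch sketch (state a|s1>|e1> + b|s2>|e2>; the amplitudes and the overlap parameter are hypothetical), showing how the environment overlap <e2|e1> controls the interference terms of the system's reduced density matrix:

```python
import numpy as np

# Equal-weight branches a|s1>|e1> + b|s2>|e2>, system states orthogonal
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)

def reduced_rho(overlap):
    # System density matrix in the {|s1>, |s2>} basis after tracing out
    # the environment; 'overlap' plays the role of <e2|e1>.
    return np.array([[abs(a)**2,                    a * np.conj(b) * overlap],
                     [np.conj(a) * b * np.conj(overlap), abs(b)**2]])

print(reduced_rho(1.0))  # identical environments: off-diagonals survive (interference)
print(reduced_rho(0.0))  # orthogonal environments: diagonal mixture, projection "for all practical purposes"
```

Once the overlap is (and stays) essentially zero, no measurement on the system alone can distinguish the full |psi> from the projected branch, which is the observer-relative projection described above.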

There is in fact only a difference between the von Neumann and the MWI view if some ontological status is given to the wavefunction. In both, the projection plays a role, but in the von Neumann view, something "happened" to the wavefunction, while in the MWI view, it is a "trick to only work with the relevant part of the wavefunction from the point of view of an observer".

In a true Copenhagen view (which is often confused with a von Neumann view), things are less clear. In a Copenhagen view, it somehow doesn't make sense to talk about the quantum state of the observer, who lives in a classical world. So you cannot talk about a premeasurement evolution in this language, because there is no quantum mechanical description of the observer, or even of the macroscopic measurement system. There, the projection somehow happens directly to the microscopic system, and it is impossible to analyse the interaction between the measurement apparatus and the system any more deeply (at least this is how I understand it).

In the viewpoint that the wavefunction does not have any ontological status, there is no difference between von Neumann and MWI ; however, I think that there is a difference between Copenhagen on one side (which gives ontological status to all "classical" objects, and denies it to all quantum objects) and MWI/von Neumann on the other, in which either the wavefunction has an ontological status, or nothing has.

Given that Copenhagen (if I understand it well) doesn't give any ontological status to the wavefunction, I guess that the projection also doesn't mean much: it is just calculational machinery that somehow makes classical measurement apparatus give results.

EDIT: I might try to add the strong points and weaknesses of each viewpoint:

- in von Neumann and MWI, not giving an ontological status of the wavefunction (in which case there is no difference between both) deprives physics in fact of all forms of ontology. We only have observers which know results of measurements, but nothing is really there, because everything is in fact described by a wavefunction. So that's strange: we know results of measurements about nothing.

- in von Neumann and MWI, giving an ontological status to the wavefunction (which I think is the only sensible thing to do), I think that von Neumann has the advantage of an ontology which corresponds to our observations (measured things really happened), but has the problem that a measurement by a single individual (what is that ?) changes the objective state of the entire universe, moreover in a non-local way.
MWI has the advantage of implying that a measurement only affects the observer, and that locality is respected, but has two other problems: the first is that the measurement doesn't correspond to "what is really there", and the second is that one needs to distinguish between the physical construction of the observer (the body) and the "mind"/"consciousness" associated with it ; the latter being in fact the "true observer".

- Copenhagen as such, to me, cries out for a more general theory, in which quantum theory is the limiting case for "microscopic" systems, and classical physics a limiting case for "macroscopic" systems. But EPR situations make it damn hard to define what is microscopic and what is macroscopic. It might very well be that Copenhagen is the crude version of what is to come. However, as it stands today, it has the disadvantage of not defining where this famous boundary between quantum and classical behaviour lies.
cheers,
Patrick.
 
Last edited:
  • #42
This von Neumann view intrigues me. It sounds like the preparation interaction is pre-consciousness, as if it could happen without an observer. True?
 
  • #43
selfAdjoint said:
This von Neumann view intrigues me. It sounds like the preparation interaction is pre-consciousness, as if it could happen without an observer. True?

Yes, it is exactly the same unitary evolution as in MWI ! And to say that von Neumann wrote that back in 1932... I take his book from the shelf and read...

On p 418 of his monumental book, we read:

"Let us now compare these circumstances with those which actually exist in nature or in its observation. First, it is inherently entirely correct that the measurement or the related process of the subjective perception is a new entity relative to the physical environment and is not reducible to the latter. Indeed, subjective perception leads us into the intellectual inner life of the individual, which is extra-observational by its very nature (since it must be taken for granted by any conceivable observation or experiment). Nevertheless, it is a fundamental requirement of the scientific viewpoint - the so-called principle of the psycho-physical parallelism - that it must be possible so to describe the extra-physical process of the subjective perception as if it were in reality in the physical world - i.e. to assign to its parts equivalent physical processes in the objective environment, in ordinary space. Of course in this correlating procedure there arises the frequent necessity of localizing some of these processes at points which lie within the portion of space occupied by our own bodies. But this does not alter the fact of their belonging to the world about us, the objective environment referred to above. In a simple example, these concepts might be applied about as follows: we wish to measure a temperature. If we want, we can pursue this process numerically until we have the temperature of the environment of the mercury container of the thermometer, and then say: this temperature is measured by the thermometer. But we can carry the calculation further, and from the properties of the mercury, which can be explained in kinetic and molecular terms, we can calculate its heating, expansion and the resultant length of the mercury column, and then say: this length is seen by the observer.
Going still further and taking the light source into consideration, we could find out the reflection of the light quanta on the opaque mercury column, and the path of the remaining light quanta into the eye of the observer, their refraction in the eye lens, and the formation of an image on the retina, and then we would say: this image is registered by the retina of the observer. And were our physiological knowledge more precise than it is today, we could go still further, tracing the chemical reactions which produce the impression of this image on the retina, in the optic nerve tract and in the brain, and then in the end say: these chemical changes of his brain cells are perceived by the observer. But in any case, no matter how far we calculate, to the mercury vessel, to the scale of the thermometer, to the retina, or into the brain, at some time we must say: and this is perceived by the observer. That is, we must always divide the world into two parts, the one being the observed system, the other the observer. In the former, we can follow up all physical processes (in principle at least) arbitrarily precisely. In the latter, this is meaningless.
The boundary between the two is arbitrary to a very large extent. In particular we saw in the four different possibilities in the example above, that the observer in this sense needs not to become identified with the actual body of the actual observer: [...]
Indeed, experience only makes statements of this type: an observer has made a certain (subjective) observation ; and never anything like this: a physical quantity has a certain value.
Now quantum mechanics describes the events which occur in the observed portions of the world, so long as they do not interact with the observing portion, with the aid of process 2 (unitary evolution), but as soon as such an interaction occurs, (a measurement) it requires the application of process 1. The dual form is therefore justified."

Then, on p 437 - p445 he explains in painstaking detail the "premeasurement" (not his terminology) and the measurement, when he considers I the system, II the apparatus (+ part of the body, eventually brain included) and III the "observer". It is the last one which applies the projection postulate... Hey, this is closer to MWI than I ever thought !

cheers,
Patrick.
 