What Is Surprising About Wave Function Collapse?

In summary, the conversation discusses the concept of wave function collapse in quantum mechanics and the idea that an external observer is needed to determine when a measurement outcome is seen. This leads to the so-called measurement problem and the fact that the theory only predicts probabilities of observation results. The conversation also touches on the double-slit experiment and the idea that particles do not have a position until they are measured. It ends with a discussion of the difficulties of understanding quantum mechanics and the need to let go of classical beliefs.
  • #106
vanhees71 said:
No, beams are not trajectories of particles in a classical sense. That's the whole point of this example! After the magnet of a properly constructed SG apparatus, you have a sufficiently good separation of beam-like regions of space, where only silver atoms in FAPP pure ##\sigma_z=+\hbar/2## states are found. I wrote FAPP, because in fact there's always a tiny probability to find a silver atom at such a place with ##\sigma_z=-\hbar/2##, but you can make this tiny probability as tiny as you wish. That's why I wrote FAPP. Just looking at silver atoms in this region of space is the only thing you need to have an ensemble of silver atoms prepared in a (FAPP) pure ##\sigma_z=+\hbar/2## state. No collapse argument is necessary to make this preparation. Note that a collapse is necessary only for state preparations, not for measurements, which usually destroy the object observed (like a photon hitting a photo/CCD plate, a particle being absorbed in ALICE's calorimeter, and so on), and you don't need to bother about what state it might be in for later measurements ;-).

So do you reject that it is possible to do a preparation, then measurement A, followed by measurement B?
 
  • #107
Demystifier said:
I answered it in post #99. I cannot understand how someone can simultaneously be both (a) agnostic about ontology and (b) non-agnostic about collapse.
I think the point is not to be non-agnostic about collapse, but rather not to apply it in situations where it isn't needed. The Stern-Gerlach experiment is one such situation, and vanhees71 is arguing (if I understood him correctly) that every other possible experiment is also such a situation. Now this may or may not be correct, but I can't think of a situation where it is not the case. However, I agree to stay agnostic about it.

atyy said:
Exactly what Demystifier said - if we do not make an ontological commitment, then we do have collapse. It is precisely because the wave function is not real that collapse is needed.
I don't find this argument convincing. The wave function may or may not be real, but it certainly contains information about "what is going on". There may be situations where a collapse would discard too much of this information. We usually collapse the wave function in situations where we are fairly certain that the information we lose is not relevant for the further description of the system. But then again, it can't hurt to carry around the irrelevant information. It just complicates the description, so we usually don't do it.

"given that we are not assigning ontological status to anything, let alone the state-vector, then you are free to collapse it, uncollapse it, evolve it, swing it around your head or do anything else you like with it. After all, if it is not supposed to represent anything existing in reality then there need not be any physical consequences for reality of any mathematical manipulation, such as a projection, that you might care to do."
Well, as I said, the wave function contains information about "what is going on", so you had better only apply manipulations that don't discard relevant information if you want to end up with something that can still be used to describe physics. But apart from that, I agree with Matt Leifer.

Collapse is a standard part of the minimal interpretation. As we discussed before, one does not need it if one does not do successive measurements. However, vanhees71 has not yet rejected successive measurements.
If you want to describe successive measurements using the filtering framework, you need to include the apparatus in the description. The information that is usually lost during collapse is then just hidden in the description of the apparatus and we may or may not discard it. Since the information has become irrelevant for the further description, the predictions of the theory aren't influenced by our decision.

Once again, I stress that vanhees71 is making a technical error, so this debate is not a matter of taste. He is rejecting the textbook formulation of quantum mechanics, e.g. Nielsen and Chuang, Holevo, or Weinberg.
I think vanhees71 is still using textbook QM. He's just using a more sophisticated description of the system that doesn't discard irrelevant information.
 
  • #108
vanhees71 said:
Note that a collapse is necessary only for state preparations, not for measurements, which usually destroy the object observed
1) Now you finally admit that a collapse is necessary for something. That's progress.
2) As I already explained in another post, the "destruction" in the one-particle Hilbert space can be described as a collapse in a larger Hilbert space of full quantum field theory.
 
  • #109
rubi said:
I think vanhees71 is still using textbook QM. He's just using a more sophisticated description of the system that doesn't discard irrelevant information.

As we have agreed (I think), one can do without collapse if one rejects successive measurements. However, vanhees71 has not yet articulated this assumption, and I would like to see it clearly articulated before collapse is rejected.
 
  • #110
rubi said:
He's just using a more sophisticated description of the system that doesn't discard irrelevant information.
If so, then why is he not using an even more sophisticated description, which does not discard the irrelevant information associated with state preparation?
 
  • #111
Demystifier said:
1) Now you finally admit that a collapse is necessary for something. That's good.
2) As I already explained in another post, the "destruction" in the one-particle Hilbert space can be described as a collapse in a larger Hilbert space of full quantum field theory.
Argh! I should have said

"Note that a collapse is assumed to be necessary by collapse proponents only for state preparations, not for measurements, which usually destroy the object observed..." I still don't consider it a necessary part of QT nor one that can be defined in an unambiguous and consistent way!

What do you mean by "destruction" in the one-particle Hilbert space? My arguments about the SG experiment work fully in the realm of non-relativistic single-particle quantum mechanics for a spin-1/2 particle described by the Pauli equation. Perhaps you can understand enough of my corresponding section in my German QM 2 manuscript. It's all pretty simple textbook QM:

http://theory.gsi.de/~vanhees/faq/quant/node102.html#potel2005quantum

For a more complete (numerical) investigation of the "spin-flip probability", leading to (practically arbitrarily small) contaminations of the "spin-up beam" with spin-down silver atoms, see

G. Potel, F. Barranco, S. Cruz-Barrios, J. Gómez-Camacho, Quantum mechanical description of Stern-Gerlach experiments, Phys. Rev. A 71, 052106 (2005).
http://dx.doi.org/10.1103/PhysRevA.71.052106
 
  • #112
vanhees71 said:
Argh! I should have said

"Note that a collapse is assumed to be necessary by collapse proponents only for state preparations, not for measurements, which usually destroy the object observed..." I still don't consider it a necessary part of QT nor one that can be defined in an unambiguous and consistent way!

Of course. Collapse is only necessary for preparations that are a result of measurements. In other words, collapse is needed if one does preparation, then measurement A, then measurement B. In such a case, measurement A is the preparation procedure for measurement B, and that is where collapse is needed.

So if you reject collapse, you reject that it is possible to do successive measurements. It is fine to reject successive measurements, but it is non-standard, so you should state it explicitly, just as one has to state other non-standard assumptions, such as hidden variables or MWI, explicitly when they are used.
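A minimal numerical sketch of this "measurement A is the preparation for measurement B" bookkeeping, for a single spin-1/2 (the state, projectors, and outcome choices below are arbitrary illustrative picks): applying the Lüders collapse after A and then the Born rule for B gives the same joint probability as chaining the projectors.

[CODE=python]
import numpy as np

# Toy spin-1/2 example: preparation, measurement A (sigma_z), then measurement B (sigma_x).
up_z = np.array([1, 0], dtype=complex)
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)

P_A = np.outer(up_z, up_z.conj())          # projector onto sigma_z = +hbar/2
P_B = np.outer(up_x, up_x.conj())          # projector onto sigma_x = +hbar/2

psi = np.array([0.6, 0.8], dtype=complex)  # some prepared state

# Measurement A with the Lüders (collapse) rule:
p_A = np.vdot(psi, P_A @ psi).real         # Born probability for the A = +hbar/2 outcome
psi_after_A = P_A @ psi / np.sqrt(p_A)     # collapsed state = preparation for measurement B

# Measurement B on the collapsed state:
p_B_given_A = np.vdot(psi_after_A, P_B @ psi_after_A).real

# The same joint probability from the chained projectors, with no explicit collapse step:
p_joint = np.linalg.norm(P_B @ P_A @ psi) ** 2

print(p_A * p_B_given_A, p_joint)          # both 0.18
[/CODE]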
 
  • #113
atyy said:
And again, I stress that in the minimal interpretation it is wrong to use a one-measurement procedure to argue against collapse. In the minimal interpretation, there is no need for collapse if one does only one measurement.
The definition of "measurement" is crucial here. If I remember your position correctly, you define it as the occurrence of an irreversible mark. But irreversibility is not fundamental, so how do you determine whether there's one or two measurements?
 
  • #114
vanhees71 said:
What do you mean by "destruction" in the one-particle Hilbert space?
I mean (for instance) the phenomenon of the destruction of a single photon in the measurement of that photon, abstractly described in the language of states in the Hilbert space for that photon.

Let me be more specific. If ##|1\rangle## is the one-photon state, then the destruction can be described as a transition
$$|1\rangle \;\rightarrow\; \text{nothing},$$
where "nothing" means that no state in the Hilbert space is associated with photon(s).

The same process can be more properly described as a collapse in a larger space spanned by ##|1\rangle## and the vacuum ##|0\rangle##, e.g. as a transition
$$c_1 |1\rangle + c_0 |0\rangle \;\rightarrow\; |0\rangle ,$$
where ##|c_1|^2## is the prior probability that the photon will be detected (and consequently destroyed), while ##|c_0|^2## is the prior probability that the photon will not be detected (and consequently not destroyed).

While "nothing" is not a state in the Hilbert space associated with photons, ##|0\rangle## is such a state: it is the state in which the photon number ##n## is precisely defined and equal to ##n=0##.
 
  • #115
kith said:
But irreversibility is not fundamental, so how do you determine whether there's one or two measurements?
It's not fundamental, but it's usually well defined FAPP (for all practical purposes).

For instance, for the sake of definiteness, one can say that a process is considered FAPP irreversible when its Poincaré recurrence time is larger than 1000 years.
 
  • #116
atyy said:
As we have agreed (I think), one can do without collapse if one rejects successive measurements. However, vanhees71 has not yet articulated this assumption, and I would like to see it clearly articulated before collapse is rejected.
I don't want to speak on behalf of vanhees71, but I don't reject successive measurements. I just say that one needs a way more complicated model if one wants to describe them in such a way that the predictions agree with the collapse description. I also don't reject collapse. I just can't think of a situation where I couldn't come up with a (potentially much more complicated) model that doesn't rely on collapse.

Demystifier said:
If so, then why is he not using an even more sophisticated description, which does not discard the irrelevant information associated with state preparation?
Because he doesn't have the necessary information available. If he did, he might as well use an even more sophisticated description.
 
  • #117
vanhees71 said:
No, it's not assuming that the silver atom starts off in a certain spin-##z## state. The incoming beam is rather in a thermal state given that the beam is extracted from a little oven of hot silver vapor!
My understanding of spin is horribly shaky, but doesn't the x-up,x-down basis span the entire spin state space (not just x-spin)? If I'm right, interaction with the thermal bath leaves the atom in an improper mixed state. Its spin is not merely undefined but is FAPP random: up or down in any direction you care to choose.
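A quick check of that last claim (with an arbitrarily chosen measurement direction): for the maximally mixed spin-1/2 state, the probability of finding spin-up along any axis is 1/2.

[CODE=python]
import numpy as np

# Maximally mixed (unpolarized/thermal) spin-1/2 state: 50/50 along any axis.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rho = np.eye(2) / 2                                   # maximally mixed spin state

theta, phi = 0.7, 1.9                                 # an arbitrary measurement direction
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])
sigma_n = n[0] * sx + n[1] * sy + n[2] * sz
P_up_n = (np.eye(2) + sigma_n) / 2                    # projector onto spin-up along n

print(np.trace(rho @ P_up_n).real)                    # 0.5, independent of (theta, phi)
[/CODE]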
 
  • #118
rubi said:
Because he doesn't have the necessary information available. If he did, he might as well use an even more sophisticated description.
That would make sense, if only he could confirm this.
 
  • #119
rubi said:
I don't want to speak on behalf of vanhees71, but I don't reject successive measurements. I just say that one needs a way more complicated model if one wants to describe them in such a way that the predictions agree with the collapse description. I also don't reject collapse. I just can't think of a situation where I couldn't come up with a (potentially much more complicated) model that doesn't rely on collapse.

Coming up with the more complicated model is what I mean by rejecting successive measurements. In the more complicated model, one uses something similar to the deferred measurement principle. Anyway, I think we agree apart from slight differences in terminology.

The only difference might be one of taste. To me, as long as one does not solve the measurement problem and there is no sense to the "wave function of the universe", if quantum mechanics is just a tool to predict measurement outcomes, then it is more convenient to take collapse as a postulate, rather than operating in a very much larger Hilbert space, especially in cases where the successive measurements are time stamped, and one would have to include the measurement apparatus as well as a clock in the Hilbert space. In other words, if quantum mechanics is a tool, then collapse is a powerful tool in that it allows you to take a small Hilbert space. This of course is religion http://mattleifer.info/wordpress/wp-content/uploads/2008/11/commandments.pdf :)
 
  • #120
kith said:
The definition of "measurement" is crucial here. If I remember your position correctly, you define it as the occurrence of an irreversible mark. But irreversibility is not fundamental, so how do you determine whether there's one or two measurements?

Irreversibility is fundamental, because we are operating in a minimal interpretation. There is no unitarily evolving wave function of the universe. After you have made your last measurement, the wave function is discarded.
 
  • #121
atyy said:
Irreversibility is fundamental, because we are operating in a minimal interpretation. There is no unitarily evolving wave function of the universe. After you have made your last measurement, the wave function is discarded.
In other words, irreversibility is fundamental for the operational formulation of QM.
 
  • #122
rubi said:
I don't want to speak on behalf of vanhees71, but I don't reject successive measurements. I just say that one needs a way more complicated model if one wants to describe them in such a way that the predictions agree with the collapse description. I also don't reject collapse. I just can't think of a situation where I couldn't come up with a (potentially much more complicated) model that doesn't rely on collapse.

I replied to this in post #119, and just wanted to add a bit here. It's fine if one rejects successive measurements, operates in the larger Hilbert space, and also does not agree that quantum mechanics predicts that the Bell inequalities are violated at spacelike separation.

However, if one rejects successive measurements, and agrees that quantum mechanics predicts the Bell inequalities are violated at spacelike separation, then I think there is a preferred frame for the calculation - which is fine - but I just wanted to bring this up. There is a preferred frame for the calculation, because two spacelike separated events will be simultaneous in one frame, but not in another.
 
  • #123
atyy said:
Coming up with the more complicated model is what I mean by rejecting successive measurements. In the more complicated model, one uses something similar to the deferred measurement principle. Anyway, I think we agree apart from slight differences in terminology.
Oh I see. Yes, I think we agree then.

atyy said:
The only difference might be one of taste. To me, as long as one does not solve the measurement problem and there is no sense to the "wave function of the universe", if quantum mechanics is just a tool to predict measurement outcomes, then it is more convenient to take collapse as a postulate, rather than operating in a very much larger Hilbert space, especially in cases where the successive measurements are time stamped, and one would have to include the measurement apparatus as well as a clock in the Hilbert space. In other words, if quantum mechanics is a tool, then collapse is a powerful tool in that it allows you to take a small Hilbert space. This of course is religion http://mattleifer.info/wordpress/wp-content/uploads/2008/11/commandments.pdf :)
I'm not saying that we should discard the collapse postulate for practical calculations. That would be really stupid, indeed. :) But the question becomes important in quantum gravity, especially in quantum cosmology. It would be very counter-intuitive, to put it mildly, if our actions here on Earth could have any drastic effect on the rest of the universe.

atyy said:
However, if one rejects successive measurements, and agrees that quantum mechanics predicts the Bell inequalities are violated at spacelike separation, then I think there is a preferred frame for the calculation - which is fine - but I just wanted to bring this up. There is a preferred frame for the calculation, because two spacelike separated events will be simultaneous in one frame, but not in another.
This is probably a terminology issue, but I would say that the fact that some events are simultaneous in one frame doesn't make the frame preferred, just like the fact that there is a frame in which the doors of a train open simultaneously doesn't make that frame preferred.
 
  • #124
atyy said:
Irreversibility is fundamental, because we are operating in a minimal interpretation. There is no unitarily evolving wave function of the universe. After you have made your last measurement, the wave function is discarded.
I agree but this is a quite trivial kind of irreversibility. It doesn't imply anything about the irreversibility of intermediate processes. At least not unless you use the term minimal interpretation in a different sense than vanhees71.
 
  • #125
kith said:
I agree but this is a quite trivial kind of irreversibility. It doesn't imply anything about the irreversibility of intermediate processes. At least not unless you use the term minimal interpretation in a different sense than vanhees71.

What is the difference? Fundamentally, you have to impose the measurement from outside. If one allows the unitary evolution to stop due to a measurement, then the measurement is still fundamental.
 
  • #126
atyy said:
What is the difference? Fundamentally, you have to impose the measurement from outside. If one allows the unitary evolution to stop due to a measurement, then the measurement is still fundamental.
Well this started about the number of measurements being well-defined. I don't think there's real dissent anymore.
 
  • #127
kith said:
Well this started about the number of measurements being well-defined. I don't think there's real dissent anymore.

Just in case, the idea then is that if one allows the outside observer to recognize one measurement, then he can also recognize two measurements, etc ... which is why one usually assumes that successive measurements are possible.
 
  • #128
rubi said:
Oh I see. Yes, I think we agree then.

Yes, as far as I can tell we do, so the rest of my remarks are just tiny random comments on terminology or beyond the standard model.
rubi said:
I'm not saying that we should discard the collapse postulate for practical calculations. That would be really stupid, indeed. :) But the question becomes important in quantum gravity, especially in quantum cosmology. It would be very counter-intuitive, to put it mildly, if our actions here on Earth could have any drastic effect on the rest of the universe.

Yes. But in that case, is the minimal interpretation enough? In the minimal interpretation, we still need the external "classical" observer to make the Heisenberg cut, choose the preferred basis (this part can maybe be replaced by a criterion like the predictability sieve), and decide when the measurement outcome occurs (i.e. pick a threshold for when decoherence is good enough, since decoherence is never perfect). But the classical observer presumably has a lab in classical spacetime. Can there be a classical spacetime in quantum gravity?

So far the only proposal for a non-perturbative definition of quantum gravity is AdS/CFT in AdS space, where the observer can sit on the "classical" boundary, then quantum mechanics in the bulk is emergent and presumably approximate, especially with all the firewall problems. I think this is why many QG people are interested in non-minimal approaches, like MWI or Rovelli's relational interpretation, since those approaches try to make sense of the wave function of the universe.

Or maybe we can have the external nonlocal observer like http://arxiv.org/abs/hep-th/0106109, whatever that means - it'd be almost like Wheeler's the universe observing itself.

rubi said:
This is probably a terminology issue, but I would say that the fact that some events are simultaneous in one frame doesn't make the frame preferred, just like the fact that there is a frame in which the doors of a train open simultaneously doesn't make that frame preferred.

Well, if we have agreed not to use successive measurements, then we should not calculate in frames in which the measurements are successive.

Alternatively, we can, but then we only have the report that the Bell inequalities were violated at spacelike separation, which says nothing about whether they were actually violated at spacelike separation. So this view, in which we always push the measurements as far back as possible, sits more easily with taking the cut so that Bob does not consider Alice to be real at spacelike separation; Alice is only real when she meets Bob face to face.
 
  • #129
atyy said:
Collapse is a standard part of the minimal interpretation. As we discussed before, one does not need it if one does not do successive measurements. However, vanhees71 has not yet rejected successive measurements.

I don't quite understand what the disagreement is about when it comes to successive measurements. I think that it's not too difficult to reformulate standard "minimalist" QM so that instead of being a theory of probabilities for outcomes of observations, it's a theory for computing probabilities for entire histories of observations. The probabilities for histories of observations are probably indistinguishable in practice from what you would get assuming "observation collapses the wave function", but it wouldn't actually describe any particular "event" of collapse, because a theory of histories doesn't have a notion of state, period, and so it doesn't actually capture anything about state changes. The closest there would be to a "state" would be just a record of the history so far.
 
  • #130
atyy said:
Yes. But in that case, is the minimal interpretation enough? In the minimal interpretation, we still need the external "classical" observer to make the Heisenberg cut, choose the preferred basis (this part can maybe be replaced by a criterion like the predictability sieve), and decide when the measurement outcome occurs (i.e. pick a threshold for when decoherence is good enough, since decoherence is never perfect). But the classical observer presumably has a lab in classical spacetime. Can there be a classical spacetime in quantum gravity?
I will answer from the perspective of canonical QG. You still have a manifold consisting of events, just like in GR, with the only difference that the metric (or connection, in the connection formulation) is no longer a classical object. An observer is still a timelike curve on the manifold. The difference between canonical QG and quantum field theory is that standard quantum field theory relies on a classical metric and this is no longer given in a quantum gravity context. But once you have overcome this technical difficulty, you have a quantum theory with a Hilbert space and observables and you can use it just like any other quantum theory and compute probabilities and expectation values. Quantum theory doesn't really require a part of the world to be described using classical physics. What it really requires is that there is a reliable measurement apparatus. Whether that apparatus itself is governed by quantum theory or not isn't really relevant. It just needs to spit out numbers in a reproducible manner. That such an apparatus can exist in a degenerate region of spacetime is very unlikely and so I would say that the numbers computed for such regions are meaningless, since there is just no observer who would measure them. However, there are supposed to be regions of spacetime that behave semiclassically and in these regions, the numbers are supposed to be meaningful. In LQG, the existence of such states has been proved at least on the kinematical level. It is still an open problem to find semiclassical states that solve all constraints. Of course, the theory would have to be rejected if such states could be shown to not exist.

atyy said:
So far the only proposal for a non-perturbative definition of quantum gravity is AdS/CFT in AdS space, where the observer can sit on the "classical" boundary, then quantum mechanics in the bulk is emergent and presumably approximate, especially with all the firewall problems. I think this is why many QG people are interested in non-minimal approaches, like MWI or Rovelli's relational interpretation, since those approaches try to make sense of the wave function of the universe.

Or maybe we can have the external nonlocal observer like http://arxiv.org/abs/hep-th/0106109, whatever that means - it'd be almost like Wheeler's the universe observing itself.
Unfortunately, I can't really comment on string theory. However, these problems don't show up in the canonical approach, which is really supposed to be a bona fide quantum theory, comparable to QFT with the only addition that now the metric (really the densitized triads) is a quantum variable as well. (Of course, the canonical approach has its very own problems.)

atyy said:
Well, if we have agreed not to use successive measurements, then we should not calculate in frames in which the measurements are successive.

Alternatively, we can, but then we only have the report that the Bell inequalities were violated at spacelike separation, which says nothing about whether they were actually violated at spacelike separation. So this view, in which we always push the measurements as far back as possible, sits more easily with taking the cut so that Bob does not consider Alice to be real at spacelike separation; Alice is only real when she meets Bob face to face.
We can do the calculation in any frame, but we need to transform all the elements we're interested in. So if the frames are related by a unitary transform ##U## and we are interested in some property ##P##, given by a projection operator, then in the new frame, we need to use the transformed state ##U\Psi## as well as the transformed property ##UPU^\dagger##. This ensures that all observers agree on all observable facts (as long as they agree on the states they are using). If we didn't transform the property as well, then the transformed observer would really ask a different question.
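A tiny numerical check of this invariance (the unitary, state, and projector below are randomly generated examples, nothing physical): transforming both the state and the property with the same ##U## leaves the predicted probability unchanged.

[CODE=python]
import numpy as np

# Transforming the state to U psi and the property P to U P U^dagger
# leaves <psi|P|psi> unchanged.
rng = np.random.default_rng(0)

# A random unitary from the QR decomposition of a complex Gaussian matrix.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

v = rng.normal(size=3) + 1j * rng.normal(size=3)
v /= np.linalg.norm(v)
P = np.outer(v, v.conj())                       # a rank-1 projector, the "property" P

p_original = np.vdot(psi, P @ psi).real
P_transformed = U @ P @ U.conj().T
p_transformed = np.vdot(U @ psi, P_transformed @ (U @ psi)).real

print(np.isclose(p_original, p_transformed))    # True
[/CODE]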
 
  • #131
rubi said:
Quantum theory doesn't really require a part of the world to be described using classical physics. What it really requires is that there is a reliable measurement apparatus. Whether that apparatus itself is governed by quantum theory or not isn't really relevant. It just needs to spit out numbers in a reproducible manner. That such an apparatus can exist in a degenerate region of spacetime is very unlikely and so I would say that the numbers computed for such regions are meaningless, since there is just no observer who would measure them. However, there are supposed to be regions of spacetime that behave semiclassically and in these regions, the numbers are supposed to be meaningful. In LQG, the existence of such states has been proved at least on the kinematical level. It is still an open problem to find semiclassical states that solve all constraints. Of course, the theory would have to be rejected if such states could be shown to not exist.

Yes, the term "classical" apparatus just means reliable measurement apparatus that is not included in the wave function.

So the question then is whether the observer in the semiclassical region can still access quantum gravity, or whether he just ends up seeing the semiclassical theory. You know the usual heuristic - to see the QG effect, he will need a big apparatus, then in the process of making the apparatus or the measurement, he will make a black hole ...

Also, the area operator in LQG is not gauge invariant, so are there really local observables? http://arxiv.org/abs/0708.1721

I read your other points too, but it's really just terminology, and I don't think we disagree, so I've stopped commenting on those for now :) But yes, the observables in LQG and the fact that the observer has to live in the semiclassical part spacetime is something I've never really understood whether it will work.
 
  • #132
atyy said:
So the question then is whether the observer in the semiclassical region can still access quantum gravity, or whether he just ends up seeing the semiclassical theory. You know the usual heuristic - to see the QG effect, he will need a big apparatus, then in the process of making the apparatus or the measurement, he will make a black hole ...
Well, the quantum gravity effects can leave imprints on things that can be observed with a classical apparatus. For example, it might happen that QG predicts some absorption lines in the CMB spectrum or so, and the CMB spectrum can in principle be measured to any desired precision (if those pesky experimentalists weren't so lazy :biggrin:). I'm very pessimistic about any direct observation of QG effects, but of course such statements have always eventually turned out to be wrong.

atyy said:
Also, the area operator in LQG is not gauge invariant, so are there really local observables? http://arxiv.org/abs/0708.1721
To be honest, I don't think these geometric operators have any relevance. How do you build an apparatus that measures them? The only relevant geometric operator is the volume operator, because it plays a role in the quantization of the Hamiltonian constraint. It is of course a problem, though, that we don't know any Dirac observables yet. This will hopefully change in the future. :) However, I don't consider it a conceptual problem.
 
  • #133
rubi said:
Well, the quantum gravity effects can leave imprints on things that can be observed with a classical apparatus. For example, it might happen that QG predicts some absorption lines in the CMB spectrum or so, and the CMB spectrum can in principle be measured to any desired precision (if those pesky experimentalists weren't so lazy :biggrin:). I'm very pessimistic about any direct observation of QG effects, but of course such statements have always eventually turned out to be wrong.

They seem to work quite hard :biggrin:



rubi said:
To be honest, I don't think these geometric operators have any relevance. How do you build an apparatus that measures them? The only relevant geometric operator is the volume operator, because it plays a role in the quantization of the Hamiltonian constraint. It is of course a problem, though, that we don't know any Dirac observables yet. This will hopefully change in the future. :) However, I don't consider it a conceptual problem.

What, what? That's what I thought, but I've never seen this mentioned in the literature!

Do you buy the heuristic argument that quantum gravity has no local observables?

What about what Rovelli calls partial observables?
 
  • #134
rubi said:
Well, the quantum gravity effects can leave imprints on things that can be observed with a classical apparatus. For example, it might happen that QG predicts some absorption lines in the CMB spectrum or so, and the CMB spectrum can in principle be measured to any desired precision (if those pesky experimentalists weren't so lazy :biggrin:). I'm very pessimistic about any direct observation of QG effects, but of course such statements have always eventually turned out to be wrong.

To be honest, I don't think these geometric operators have any relevance. How do you build an apparatus that measures them? The only relevant geometric operator is the volume operator, because it plays a role in the quantization of the Hamiltonian constraint. It is of course a problem, though, that we don't know any Dirac observables yet. This will hopefully change in the future. :) However, I don't consider it a conceptual problem.

Isn't this just a little bit oversimplified? For a high school level (B) thread, I mean :cool:
 
  • #135
rubi said:
To be honest, I don't think these geometric operators have any relevance.

Is not the area operator used in the derivation of the black hole entropy formula? At least I remember reading that in some papers (they had some sum over the area eigenvalues and that's how the area enters into the entropy formula; the area operator seemed to be the key point in the derivation), e.g., equations (6), (7), (8), (19), (20), (21), here http://arxiv.org/pdf/1204.5122v1.pdf.

My knowledge of LQG is very rudimentary though, so I don't know, maybe you are still right for some reason.
 
  • #136
stevendaryl said:
I don't quite understand what the disagreement is about when it comes to successive measurements. I think that it's not too difficult to reformulate standard "minimalist" QM so that instead of being a theory of probabilities for outcomes of observations, it's a theory for computing probabilities for entire histories of observations. The probabilities for histories of observations are probably indistinguishable in practice from what you would get assuming "observation collapses the wave function", but it wouldn't actually describe any particular "event" of collapse, because a theory of histories doesn't have a notion of state, period, and so it doesn't actually capture anything about state changes. The closest there would be to a "state" would be just a record of the history so far.
It is not true that a theory of histories doesn't have a notion of state. See e.g.
http://lanl.arxiv.org/abs/quant-ph/0209123
Eq. (40). It depends on ##\rho(t_0)##, and ##\rho(t_0)## is the state.

But you are right that it does not have a notion of a time-dependent state. Yet it has projectors ##P_i(t_i)## at different times ##t_i##. The act of a projector at time ##t## is practically the same as a collapse at time ##t##.
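To spell out that connection with a toy example (the projectors and initial state are chosen arbitrarily, and unitary evolution between the two times is omitted): the history probability ##\mathrm{Tr}[P_2 P_1 \rho(t_0) P_1 P_2]## equals what step-by-step collapse gives.

[CODE=python]
import numpy as np

# History probability Tr[C rho C^dagger] with the chain operator C = P2 P1,
# compared against applying the projectors one at a time as collapses.
up_z = np.array([1, 0], dtype=complex)
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
P1 = np.outer(up_z, up_z.conj())               # projector acting at t_1
P2 = np.outer(up_x, up_x.conj())               # projector acting at t_2

rho0 = np.diag([0.3, 0.7]).astype(complex)     # the state rho(t_0)

C = P2 @ P1
p_history = np.trace(C @ rho0 @ C.conj().T).real

# The same number from collapsing step by step:
p1 = np.trace(P1 @ rho0).real
rho1 = P1 @ rho0 @ P1 / p1                     # collapsed state after the first projector
p2 = np.trace(P2 @ rho1).real

print(p_history, p1 * p2)                      # both 0.15
[/CODE]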
 
  • #137
stevendaryl said:
I don't quite understand what the disagreement is about when it comes to successive measurements. I think that it's not too difficult to reformulate standard "minimalist" QM so that instead of being a theory of probabilities for outcomes of observations, it's a theory for computing probabilities for entire histories of observations. The probabilities for histories of observations are probably indistinguishable in practice from what you would get assuming "observation collapses the wave function", but it wouldn't actually describe any particular "event" of collapse, because a theory of histories doesn't have a notion of state, period, and so it doesn't actually capture anything about state changes. The closest there would be to a "state" would be just a record of the history so far.

The difference is between (A) having outcomes at different times, versus (B) a report of outcomes at different times. There is no commitment in (B) that the outcomes at different times were real events. It is analogous to Bob drawing the classical/quantum cut in a Bell test such that Alice is not real at spacelike separation, only the report of the events, so that the correlations do not occur at spacelike separation, and there is no nonlocality.

These are such unusual assumptions (no real outcomes at successive times, no spacelike separated real objects) that I think vanhees71 has to state them (no successive measurements), just as a person using less conventional assumptions like BM or MWI has to state them. Incidentally, IIRC Feynman, despite his problematic presentation of QM, does state that he always takes only a single measurement in any experiment (but I couldn't point you to where he said this, so this may be wrong).

Edit: There are lots of physicists in neurobiology, which is natural given the role of consciousness in both subjects. Anyway, I recently heard a joke from a bunch of theorists - you know, the experimentalists - they never include us on their side of the cut - but they did for the Higgs boson, which was wonderful!
 
  • #138
If one real (modern) Turing machine were simulating A, and one were simulating B (the sim being a virtual reality across some history, of some n QM objects), would the flow of information and/or heat across their boundaries be different?

I can't get past the sense that B would explode, or become a black hole. Isn't collapse a thermodynamic process? Interference information is lost or selection information is added (depending on how you look at it). If one machine (A) can manage equilibrium by using collapse to trace out a state and discard the information describing the probability wave - and all the superpositions - but B can't... Or are both supposedly able to?
 
  • #139
stevendaryl said:
I know it's not really in a definite state of [itex]s_z[/itex], but I don't see how it makes sense to consider the experiment a "filtering" experiment, if the atoms don't have a definite spin state.
Sorry, what do you mean by "filtering experiment"? That it separates two different states from a mixed state?

--
lightarrow
 
  • #140
In case my question about the thermodynamics of QM measurement was too poorly worded...
http://arxiv.org/abs/quant-ph/0605031

"The point of this brief paper is to show that if proposals [2-6] that the measurement process results from non-linear decoherence processes which violate CPT symmetry [7] turn out to be correct, then the macroscopic behavior described by the second law would follow almost trivially as a consequence."Irreversibility in Collapse-Free Quantum Dynamics and the Second Law of Thermodynamics
M. B. Weissman
(Submitted on 2 May 2006)
Proposals to solve the problems of quantum measurement via non-linear CPT-violating modifications of quantum dynamics are argued to provide a possible fundamental explanation for the irreversibility of statistical mechanics as well. The argument is expressed in terms of collapse-free accounts. The reverse picture, in which statistical irreversibility generates quantum irreversibility, is argued to be less satisfactory because it leaves the Born probability rule unexplained.
 
