atyy said: All you are doing is replacing the "classical/quantum cut" with the "macroscopic/microscopic cut".

Yep, but contrary to the former the latter makes physical sense!
stevendaryl said: It's the same cut.

It's not. In some cases (superfluids, superconductors, laser beams) macroscopic objects can behave quantum mechanically, in the sense of having macroscopic quantum coherence.
stevendaryl said: Yes, that's what I said was the essence of the measurement problem.

Sure, coarse-graining and decoherence is the answer. What else do you need to understand why macroscopic objects are well described by classical physics? Note that this is a very different interpretation from the quantum-classical cut (imho erroneously) postulated in Bohr's version of the Copenhagen interpretation.
Hmm. It seems to me that you've said exactly what the measurement problem is. If you have a system that is in a superposition of two states, and you amplify it so that the differences become macroscopic, why doesn't that lead to a macroscopic system in a superposition of two states? Why aren't there macroscopic superpositions?
It seems to me that there are only two possible answers:
- There are no macroscopic superpositions. In that case, the problem would be how to explain why not.
- There are macroscopic superpositions. In that case, the problem would be to explain why they're unobservable, and what the meaning of the Born probabilities is if no choice is made among the possibilities.

People sometimes act as if decoherence is the answer, but it's really not the complete answer. Decoherence is a mechanism by which a superposition involving a small subsystem can quickly spread to "infect" the rest of the universe. It does not solve the problem of why there are definite outcomes (see the sketch below).
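A minimal numerical sketch of that last point (an editorial illustration, not from any post; the dephasing rate is an assumed parameter): pure dephasing drives the off-diagonal coherences of a qubit density matrix to zero, while the diagonal entries - the Born weights of the two outcomes - never change, so no single outcome is ever selected.

```python
# Sketch: pure dephasing of a qubit density matrix (illustrative only).
# Coherences decay, but the outcome probabilities on the diagonal are
# untouched -- decoherence alone does not yield a definite outcome.
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # superposition amplitudes
psi = np.array([alpha, beta], dtype=complex)
rho0 = np.outer(psi, psi.conj())               # pure-state density matrix

gamma = 5.0                                    # assumed dephasing rate
for t in (0.0, 0.5, 2.0):
    rho = rho0.copy()
    rho[0, 1] *= np.exp(-gamma * t)            # off-diagonal terms decay
    rho[1, 0] *= np.exp(-gamma * t)
    print(f"t={t}: diag={np.real(np.diag(rho))}, "
          f"|coherence|={abs(rho[0, 1]):.4f}")
# The diagonal stays (0.5, 0.5) at all times: a mixture over outcomes,
# never a single definite outcome.
```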
2.
vanhees71 said: Yep, but contrary to the former the latter makes physical sense!

Are you sure? See my post #202 above!
Demystifier said: Are you sure? See my post #202 above!

That's an important point, but not against my interpretation. To the contrary, it shows that there is no general "quantum-classical cut". Superfluidity and superconductivity are nice examples showing that you have to be careful to take all relevant macroscopic observables into account, i.e., you shouldn't somehow "coarse-grain away" relevant quantum effects.
vanhees71 said: That's an important point, but not against my interpretation. To the contrary, it shows that there is no general "quantum-classical cut". Superfluidity and superconductivity are nice examples showing that you have to be careful to take all relevant macroscopic observables into account, i.e., you shouldn't somehow "coarse-grain away" relevant quantum effects.

So how to know in general where to put the micro/macro cut? The size of the system is obviously not a good criterion. Would you agree that the best criterion is the nonexistence/existence of substantial decoherence? If so, should we better call it the coherence/decoherence cut?
vanhees71 said:Sure, coarse-graining and decoherence is the answer.
Demystifier said:So how to know in general where to put the micro/macro cut? The size of the system is obviously not a good criterion. Would you agree that the best criterion is nonexistence/existence of substantial decoherence? If so, should we better call it coherence/decoherence cut?
vanhees71 said:Since when are macroscopic coarse-grained observables described by a state vector or a density matrix? It's an effective classical description of averages.
stevendaryl said:I'm saying that IF you wanted to treat a macroscopic system using quantum mechanics, one would have to use density matrices. You can certainly just pretend that you have a classical system. That's the sense in which the measurement problem is solved: there is a way to pretend that it is solved.
A. Neumaier said: As I had said before, people working in statistical mechanics do not use the eigenvalue-eigenstate link to measurement but the postulates that I had formulated (though they are not explicit about these). This is enough to get a unique macroscopic measurement result (within experimental error).

The latter supports the position of vanhees71 without having to resolve anything about superpositions or ignorance. No pretense is involved.
A. Neumaier said:The latter supports the position of vanhees71 without having to resolve anything about superpositions or ignorance. No pretense is involved.
stevendaryl said:I don't agree. Treating quantum uncertainty as if it were thermal noise is pretense.
stevendaryl said: As I said in another post: suppose we have a setup such that:
- An electron with spin up will trigger a detector to go into one "pointer state", called "UP".
- An electron with spin down will trigger a detector to go into a macroscopically different pointer state, called "DOWN".
Then the standard quantum "recipe" tells us:
- An electron in the state ##\alpha |\text{up}\rangle + \beta |\text{down}\rangle## will cause the detector to either go into state "UP" with probability ##|\alpha|^2## or into state "DOWN" with probability ##|\beta|^2##.
If you claim that this conclusion follows from pure unitary evolution of the wave function, I think you're fooling yourself. But if it doesn't follow from unitary evolution, then it seems to me that you're proposing an extra process in quantum mechanics, whereby a definite result is selected out of a number of possibilities according to the Born rule. That's fine: there is no reason to assume that there is only one kind of process in nature. But if you're proposing this extra process, then to me, you have a measurement problem. Why does this process apply to large, macroscopic systems, but not to small systems such as single electrons or single atoms?

This doesn't follow from anything but is a fundamental postulate, called Born's rule. Weinberg gives quite convincing arguments that it cannot be derived from the other postulates. So it's part of the "axiomatic setup" of the theory. In this sense there is no problem, because in physics the basic postulates are anyway subject to empirical testing and cannot be justified otherwise than by their empirical success!
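For reference, the rule under discussion in its standard textbook form (a restatement for readability, not a quote from any post): for the state ##|\psi\rangle = \alpha|\text{up}\rangle + \beta|\text{down}\rangle## with ##|\alpha|^2 + |\beta|^2 = 1##, Born's rule asserts
$$P(\text{UP}) = |\langle \text{up}|\psi\rangle|^2 = |\alpha|^2, \qquad P(\text{DOWN}) = |\langle \text{down}|\psi\rangle|^2 = |\beta|^2.$$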
stevendaryl said: As I said in another post: suppose we have a setup such that:
- An electron with spin up will trigger a detector to go into one "pointer state", called "UP".
- An electron with spin down will trigger a detector to go into a macroscopically different pointer state, called "DOWN".

There are no pointer states called UP or DOWN. The pointer is a macroscopic object, and the measurement result is that some macroscopic expectation (of the mass density of the pointer) is large in a neighborhood of the mark called UP and zero in a neighborhood of the mark called DOWN, or conversely. To model this by a quantum state UP amounts to blinding oneself to macroscopic reality. In terms of quantum mechanics, there are an astronomical number of microstates (of the size of the minimal uncertainty) that make up either UP or DOWN, and even more that make up neither UP nor DOWN (since the pointer moves continuously and takes time to make the measurement). It is no surprise that reducing this realistic situation to a simple black-and-white situation with only two quantum states leads to interpretation problems. This is due to the oversimplification of the measurement process. To quote Einstein: Everything should be modeled as simply as possible, but not simpler.
A. Neumaier said:there are no pointer states called UP or DOWN.
vanhees71 said: This doesn't follow from anything but is a fundamental postulate, called Born's rule. Weinberg gives quite convincing arguments that it cannot be derived from the other postulates. So it's part of the "axiomatic setup" of the theory. In this sense there is no problem, because in physics the basic postulates are anyway subject to empirical testing and cannot be justified otherwise than by their empirical success!
stevendaryl said: Putting it in bold face doesn't make it more true.

It is not intended to do that, but:
Physics Forums Global Guidelines said:When replying in an existing topic it is fine to use CAPS or bold to highlight main points.
stevendaryl said: In a Stern-Gerlach type experiment, an electron is either deflected upward, where it collides with a photographic plate making a dark spot on the upper plate. Or it is deflected downward, where it collides with a photographic plate making a dark spot on the lower plate. So I'm using the word "UP" to mean "there is a dark spot on the upper plate" and the word "DOWN" to mean "there is a dark spot on the lower plate".

But this is not a pointer state but the electron state. "There is a dark spot on the upper plate" is a large collection of possible microstates!
Before reaching the screen, the electron is in the superposition you describe, and the system of electron plus detector is in a state described by a tensor product of a pure state and a density matrix for the screen. This system undergoes (because of decoherence through the rest of the universe) a dissipative, stochastic dynamics that results in a new state described by a density matrix of the combined system of electron plus detector, in which the expectation of the integral of some field density over one of the two screen spots at the end of the electron beams changes in a macroscopically visible way. We observe this change of the expectation and say "the electron collapsed to state 'up' or 'down'", depending on which spot changed macroscopically.
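Schematically, and only as a reconstruction of the description above (the symbols ##\rho_D##, ##\rho_{\text{UP}}##, ##\rho_{\text{DOWN}}## are editorial, not from the post): the initial state is
$$\rho_{\text{before}} = |\psi\rangle\langle\psi| \otimes \rho_D, \qquad |\psi\rangle = \alpha|\text{up}\rangle + \beta|\text{down}\rangle,$$
and the dissipative, decoherence-driven dynamics takes it to something of the form
$$\rho_{\text{after}} \approx |\alpha|^2\,|\text{up}\rangle\langle\text{up}| \otimes \rho_{\text{UP}} + |\beta|^2\,|\text{down}\rangle\langle\text{down}| \otimes \rho_{\text{DOWN}},$$
where ##\rho_{\text{UP}}## and ##\rho_{\text{DOWN}}## are detector states whose field-density expectations over the two spots differ in a macroscopically visible way.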
A. Neumaier said: It is not intended to do that, but: But this is not a pointer state but the electron state. "There is a dark spot on the upper plate" is a large collection of possible microstates!

I agree with everything you said, except for the last sentence. Nothing collapsed here. You just get a FAPP irreversible result due to the dissipative process resulting in a macroscopic mark of the electron on the photoplate. Usually, it's impossible to say anything definitive about the fate of the poor electron hitting the plate, because it's absorbed. You cannot say that it is described by the state ##|\text{up} \rangle## when hitting a place in the "up region".
vanhees71 said: I agree with everything you said, except for the last sentence. Nothing collapsed here. You just get a FAPP irreversible result due to the dissipative process resulting in a macroscopic mark of the electron on the photoplate. Usually, it's impossible to say anything definitive about the fate of the poor electron hitting the plate, because it's absorbed.

I said we say "collapsed", and with "we" I describe current practice - one can find this phrase in many places. Even though, of course, the collapse is not needed on the level of the many-particle description but only in the approximate reduced description. What one can say depends on the nature of the screen. If it is a bubble chamber, one can see a track traced out by the electron. If it is a photographic plate, the electron will probably end up as part of a bound state of the detector.
vanhees71 said: You cannot say that it is described by the state |up⟩ when hitting a place in the "up region".

Yes, I agree. One shouldn't use this formulation, though it is used a lot.
vanhees71 said: Sure, coarse-graining and decoherence is the answer. What else do you need to understand why macroscopic objects are well described by classical physics? Note that this is a very different interpretation from the quantum-classical cut (imho erroneously) postulated in Bohr's version of the Copenhagen interpretation.
Note again that there are no definite outcomes but only approximately definite outcomes for the coarse-grained macroscopic quantities.
atyy said: But you need to introduce one more postulate to decide what to coarse-grain, i.e., where do you put the cut to decide what is macroscopic and must be coarse-grained.

This needs no postulates. Coarse-graining means removing precisely those features that oscillate too fast in space or time to be relevant for the macroscopic averages. What this is depends on the problem at hand but is an objective property of the microscopic model. And in many cases it is known. Correct coarse-graining is revealed by the fact that the memory kernel decays exponentially and sufficiently fast, which is the case only if exactly the right macroscopic set of variables is retained.
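A toy illustration of "removing features that oscillate too fast" (an editorial sketch with made-up frequencies; it does not model a memory kernel): a running time-average retains the slow "macroscopic" component of a signal while suppressing the fast one.

```python
# Toy coarse-graining by time-averaging (illustrative only): the slow
# component survives the running average; the fast oscillation is
# suppressed, roughly by 1/(omega_fast * T) for an averaging window T.
import numpy as np

t = np.linspace(0.0, 10.0, 10_000)
slow = np.sin(0.5 * t)                  # relevant macroscopic variable
fast = 0.5 * np.sin(200.0 * t)          # feature oscillating "too fast"
signal = slow + fast

window = 500                            # window spans many fast periods
coarse = np.convolve(signal, np.ones(window) / window, mode="same")

inner = slice(window, -window)          # ignore the window's edge effects
print("max deviation from slow part, raw:      "
      f"{np.max(np.abs(signal[inner] - slow[inner])):.3f}")
print("max deviation from slow part, averaged: "
      f"{np.max(np.abs(coarse[inner] - slow[inner])):.3f}")
```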
A. Neumaier said: This needs no postulates. Coarse-graining means removing precisely those features that oscillate too fast in space or time to be relevant for the macroscopic averages. What this is depends on the problem at hand but is an objective property of the microscopic model. And in many cases it is known. Correct coarse-graining is revealed by the fact that the memory kernel decays exponentially and sufficiently fast, which is the case only if exactly the right macroscopic set of variables is retained.

I like this answer a lot. But what frequency qualifies as 'oscillating too fast'?
Mentz114 said: I like this answer a lot. But what frequency qualifies as 'oscillating too fast'?

This depends on the accuracy and generality you want your model to have.
vanhees71 said:At this point in the development of physics, from a physics point of view we have to live with the observation that quantum theory, including Born's rule of probabilistic interpretation of the quantum state, describes nature quite comprehensively. As long as one doesn't find an even better theory, there won't be solutions for your philosophical quibbles!
vanhees71 said:I agree with everything you said, except for the last sentence. Nothing collapsed here.
stevendaryl said:I understand the point that an actual macroscopic outcome, such as a blackened spot on a photographic plate, involves countless numbers of particles, so it is completely impossible for us to describe such a thing using quantum mechanics. But the honest answer is that the problem of how definite results emerge is just unsolved. You don't know. That's fine. But it seems wrong to me to pretend otherwise.
vanhees71 said: For a fully quantum theoretical description of the Stern-Gerlach experiment, see
http://arxiv.org/abs/quant-ph/0409206

Thank you! I will read it.
It also shows that one can only come close to the idealized SG experiment discussed in the introductory chapters of many QT books. I think the SG experiment is great to treat at several stages of a QT course, as a not-too-complicated example that can be solved almost exactly (although some numerics is necessary, as indicated in the paper) once the full description by the Pauli equation is available.
vanhees71 said: For a fully quantum theoretical description of the Stern-Gerlach experiment, see
http://arxiv.org/abs/quant-ph/0409206

But this only treats what happens during the flight, not what happens when the spinning particles reach the detector - namely that exactly one spot signals the presence of the particle. Thus it is not directly relevant to the problem discussed here.
A. Neumaier said:Standard statistical mechanics implies dissipative deterministic or stochastic classical dynamics for coarse-grained variables in appropriate models. Even though the Stern-Gerlach experiment may not have been treated in this way there is no doubt that the deterministic (and dissipative) Navier-Stokes equations for classical hydromechanics follow from quantum statistical mechanics in a suitable approximation. This is done completely independent of observers and without any measurement problem, just with an interpretation according to my post #212 rather than the collapse interpretation. Thus one does not have to solve a selection problem to obtain definite results from quantum mechanics.
Combining this knowledge with what we know about how detectors work, it is easy to guess that the total picture is indeed the one painted by vanhees71 and myself, even though the details are a lot more complex than appropriate for PF. Concerning statistical mechanics and measurement, have you read the following papers? (I had mentioned the first of them in an earlier thread.)
Understanding quantum measurement from the solution of dynamical models
Authors: Armen E. Allahverdyan, Roger Balian, Theo M. Nieuwenhuizen
http://arxiv.org/abs/1107.2138
Lectures on dynamical models for quantum measurements
Authors: Theo M. Nieuwenhuizen, Marti Perarnau-Llobet, Roger Balian
http://arxiv.org/abs/1406.5178
There are many more articles that deal with suitable model settings...
stevendaryl said: Wow. It seems to me that what you're saying is just contrary to fact. There are two possible outcomes of the experiment: Either the upper plate has a black dot, or the lower plate has a black dot. You do the experiment, and only one of those possibilities becomes actual. That's what collapse means. When you have a small system, involving a small number of particles, the superposition principle holds: If you have two possible states, then the superposition of the two is another possible state. Is there some maximal size for which the superposition holds? Is there some maximum number of particles for which it holds? I understand the point that an actual macroscopic outcome, such as a blackened spot on a photographic plate, involves countless numbers of particles, so it is completely impossible for us to describe such a thing using quantum mechanics. But the honest answer is that the problem of how definite results emerge is just unsolved. You don't know. That's fine. But it seems wrong to me to pretend otherwise.

Having had a long look at the Nieuwenhuizen et al. (2014) treatment, I find support for the idea that nature has no cutoff/transition between quantum and classical. Quantum mechanics is always in operation - there is only one set of laws. So why do we not see 'cat' states? At what point can we use classical approximations instead of QM?
atyy said: I have to think more about the work from Allahverdyan, Balian, and Nieuwenhuizen. I came across it several years ago when someone posted it on PF. I think their approach is very interesting and worth studying. However, I think it also shows how inadequate the terrible book of Ballentine's is, and even how handwavy the wonderful book of Peres's is. Neither book comes close to supplying the non-trivial considerations that Allahverdyan and colleagues present.

All wonderful books - those by Dirac, von Neumann, Messiah, Landau and Lifshitz, Ballentine, Peres, etc. - are inadequate, terrible and handwavy in this respect! Peres is still the best of them all regarding foundations, and presents, carefully avoiding collapse, the ensemble interpretation with measurement in terms of POVMs instead of eigenvalues and eigenstates.
vanhees71 said: Well, at the detector it gets absorbed and leaves a trace there. That's why we use it as a detector. Nobody asks how to measure a trajectory in Newtonian mechanics. So why are you asking how the atom leaves a trace on a photoplate? I guess one could try to do a complicated quantum mechanical evaluation of the chemical reaction of the atom with the molecules in the photoplate, but what has this to do with the quantum theory of the atom in the inhomogeneous magnetic field of the SG apparatus?

The measurement problem appears here in the form that if we place the screen only at the left part of the beam and shoot single electrons from the source, then the right part of the beam (which continues to exist at later times) contains the electron precisely when nothing is measured in the left part. This needs explanation, and is not covered by the analysis of the Stern-Gerlach setting without screen interaction.
A. Neumaier said: For me, the real message of the Allahverdyan et al. paper - and the fact that it is 160 pages long! - is that foundations should be completely free of measurement issues, since the latter can be treated fully adequately only by fairly complex statistical mechanics. This is why I recommend alternative foundations based upon the postulates (EX) and (SM) that I had formulated. They apply to measuring both macroscopic variables (as expectations with error bars) and pure eigenstates of an operator ##A## with eigenvalue ##\alpha## (where ##\bar A=\alpha## and ##\sigma_A=0##), capture far better the quantum mechanical practice, and are much easier to state than Born's rule, especially if one compares them with the complicated form of Born's rule needed in the applications. Born's rule is derivable from these postulates in the special cases where it fully applies. See Section 10.5 of http://arxiv.org/abs/0810.1019.
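For readability, the quantities referred to there, in standard notation (the definitions below are the textbook ones; reading them as the content of (EX) is an editorial gloss on the quoted description): for a system in state ##\rho##,
$$\bar A = \mathrm{Tr}(\rho A), \qquad \sigma_A = \sqrt{\mathrm{Tr}(\rho A^2) - \bar A^2},$$
and a measurement of ##A## is taken to yield ##\bar A## to within an uncertainty of order ##\sigma_A##. For ##\rho## a pure eigenstate of ##A## with eigenvalue ##\alpha## this gives ##\bar A = \alpha## and ##\sigma_A = 0##, so the textbook eigenvalue link is recovered as a special case.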
atyy said: I suspect the introduction of sub-ensembles by Allahverdyan, Balian, and Nieuwenhuizen is the same (in spirit, even if not technically) as introducing Bohmian hidden variables - since the point of the hidden variables is to pick out a unique set of sub-ensembles.

I don't understand you. Please explain what exactly the hidden variables are in their treatment. Or do you only talk in an as-if manner - that what they do is analogous to hidden variables?
A. Neumaier said: I don't understand you. Please explain what exactly the hidden variables are in their treatment. Or do you only talk in an as-if manner - that what they do is analogous to hidden variables?
vanhees71 said:In the SG experiment with the right setup (see the paper I cited yesterday) you have entanglement between position and the spin-z component. If you block the partial beam with spin-z down, you are left with a beam with spin-z up. It may be a philosophical problem in how you come to sort out one beam. It's like choosing a red marble rather than a blue just because you like to choose the red one. What's the problem?
Of course the setup of the preparation and measurement leads to the choice of which (sub-)ensemble I measure. I don't know why I should do a very complicated calculation to explain why an atom gets stuck in some material to filter out the unwanted spin state in an SG experiment. Experience tells us how to block particles with matter. For this purpose it's enough. For others it's not, and then you can think deeper. E.g., if you want to use energy loss, dE/dx, for particle ID, you'd better have an idea how it works, and you read about the Bethe-Bloch formula and how it is derived, but there really is no problem of principle from the point of view of theoretical and experimental physics.
vanhees71 said: If you block the partial beam with spin-z down, you are left with a beam with spin-z up. It may be a philosophical problem in how you come to sort out one beam. It's like choosing a red marble rather than a blue just because you like to choose the red one. What's the problem?

I am just trying to explain why stevendaryl is not satisfied with your answers.
atyy said: You cannot block the beam in real space, if the beam is only in Hilbert space.

Every experimenter can. The location of the beam is determined by the experimental setting. Thus it is known beforehand whether the experimental setting blocks the left beam. The only question is when and whether a spinning particle actually travels in it.
atyy said: If you block the beam, you are introducing hidden variables. You cannot block the beam in real space, if the beam is only in Hilbert space.

I don't introduce hidden variables, but a very visible "beam dump". That can be a big rock or some lead shield or whatever. It's anything but hidden! SCNR.
A. Neumaier said: I am just trying to explain why stevendaryl is not satisfied with your answers.

But that's my point! You overcomplicate a simple thing like putting a "beam dump" somewhere. Of course it's not in the superposition if one partial beam is just absorbed by the beam dump. That's just a wrong description of the state after the partial beam hit the beam dump. That's it, but no complicated problems.
The problem is not that there is a choice in blocking one of the beams. That the beam is blocked may be taken as part of the experimental set-up. The problem is that if one treats this problem quantum mechanically, including the blocker and the detector, one apparently ends up (and does so definitely in the oversimplified version used by stevendaryl) in the superposition I wrote down, rather than in one of the two separable states (as observed). So something needs to be explained!
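The superposition referred to is not quoted in this section; schematically (an editorial reconstruction, with hypothetical labels for the macroscopic alternatives), unitary evolution of electron plus equipment yields a state of the type
$$\alpha\,|\text{up}\rangle \otimes |\text{detector fired}\rangle + \beta\,|\text{down}\rangle \otimes |\text{absorbed in beam dump}\rangle,$$
rather than one of the two product terms separately, which is what is actually observed.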
vanhees71 said: But that's my point! You overcomplicate a simple thing like putting a "beam dump" somewhere. Of course it's not in the superposition if one partial beam is just absorbed by the beam dump. That's just a wrong description of the state after the partial beam hit the beam dump. That's it, but no complicated problems.

But if one uses the complicated description, one should still be able to obtain the same final result. That's the whole point of deriving few-particle quantum mechanics from a more comprehensive view in which the equipment is also treated by quantum mechanics. Consistency of QM requires that the final results are the same (within approximation errors), but (according to stevendaryl's arguments) one seemingly gets something essentially different when using the more detailed description.