The typical and the exceptional in physics

SUMMARY

The discussion centers on the implications of quantum mechanics for macroscopic objects, emphasizing that while quantum mechanics allows for superpositions, practical physics often relies on approximations that focus on typical behaviors rather than exceptional cases. The participants reference Theorem 9.3.3 from statistical mechanics, asserting that macroscopic properties are better described by mixed states rather than pure states. They argue that the standard deviation of macroscopic observables is typically small due to the nature of statistical mechanics, which excludes exceptional phenomena. The conversation highlights the necessity of simplifications in physics to yield useful predictions.

PREREQUISITES
  • Understanding of quantum mechanics, particularly superposition and wave function evolution.
  • Familiarity with statistical mechanics and the concept of mixed states.
  • Knowledge of the significance of standard deviation in physical systems.
  • Awareness of Theorem 9.3.3 in statistical mechanics and its implications.
NEXT STEPS
  • Study Theorem 9.3.3 in detail to understand its application in statistical mechanics.
  • Explore the concept of mixed states and their role in quantum mechanics.
  • Research the implications of superposition in macroscopic systems and its experimental challenges.
  • Investigate the relationship between microstates and macrostates in statistical mechanics.
USEFUL FOR

Physicists, quantum mechanics researchers, and students of statistical mechanics seeking to deepen their understanding of the relationship between quantum behavior and macroscopic properties.

A. Neumaier
stevendaryl said:
Yes. If there were actually a proof that the laws of quantum mechanics imply that macroscopic objects have negligible standard deviation in their position, then there wouldn't be a measurement problem.
For properly normalized extensive macroscopic properties (and this includes the center of mass operator), there is such a proof in many treatises of statistical mechanics. It is the quantum analogue of the system size expansion for classical stochastic processes. For example, see Theorem 9.3.3 and the subsequent discussion in my online book. But you can find similar statements in all books on stochastic physics where correlations are discussed in a thermodynamic context if you care to look, though usually for different, thermodynamically relevant variables. [more on this here]
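To make the claimed scaling concrete, here is a minimal numerical sketch (not Theorem 9.3.3 itself; the single-site state below is an arbitrary illustrative choice, and the subsystems are taken as uncorrelated): for a properly normalized extensive observable over ##N## subsystems, the standard deviation shrinks like ##1/\sqrt{N}##.

```python
import numpy as np

# Single subsystem: a qubit with observable sigma_z and an arbitrary mixed state.
sz = np.diag([1.0, -1.0])                        # single-site observable
rho1 = np.diag([0.7, 0.3])                       # illustrative single-site mixed state, Tr = 1

mean1 = np.trace(rho1 @ sz)                      # <sigma_z> for one site
var1 = np.trace(rho1 @ sz @ sz) - mean1**2       # single-site variance

# For a product (uncorrelated) N-site state, variances add, so the normalized
# extensive observable m = (1/N) sum_i sigma_z^(i) has Var(m) = var1 / N.
for N in (10, 10_000, 10_000_000):
    std_m = np.sqrt(var1 / N)
    print(f"N = {N:>10d}:  <m> = {mean1:+.2f},  std(m) = {std_m:.2e}")
```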
stevendaryl said:
There is nothing in quantum mechanics that bounds the standard deviation of a variable such as position. A single electron can be in a superposition of being here, and being 1000 miles away. A single atom can be in such a superposition. A single molecule can be in such a superposition. There is nothing in quantum mechanics that says that a macroscopic object can't be in such a superposition.

Indeed. But without simplifying assumptions one can never do anything in physics. Successful science, and hence successful physics, lives from concentrating on the typical, not on the too exceptional. No physicist ever considers (except in thought experiments) a system where a single electron is in a superposition of being here and 1000 miles away. It is completely uninteresting from the point of view of applications.

Everywhere in physics one makes approximations which (in view of the inherent nonlinearity and chaoticity of the dynamics of physical systems when expressed in observable terms) exclude situations that are too exceptional. This is the reason why randomness is introduced in classical physics, and it is the reason why randomness appears in quantum physics. It is a consequence of the approximations necessary to be able to get useful and predictable results.

It is the same with statistical mechanics. Predictions in statistical mechanics exclude all phenomena that require special efforts to prepare.

For example, irreversibility is the typical situation, and this is heavily exploited everywhere in physics. But by taking special care one can devise experiments such as spin echoes where one can see that the irreversibility assumption can be unwarranted.

Similarly, it takes a lot of effort to prepare experiments where nonlocal effects are convincingly demonstrated - the typical situation is that nonlocal correlations die out extremely fast and can be ignored. As everywhere in physics, if you want to observe the untypical you need to make special efforts. These may be valuable, but they don't take anything away from the fact that under usual circumstances these effects do not occur.

If you want to have statements that are valid without exceptions you need to do mathematics, not physics. Mathematical arguments do not allow exceptions (or else they make explicit statements about their very low probability).
 
Likes: Demystifier
A. Neumaier said:
For properly normalized extensive macroscopic properties (and this includes the center of mass operator), there is such a proof in many treatises of statistical mechanics. It is the quantum analogue of the system size expansion for classical stochastic processes. For example, see Theorem 9.3.3 and the subsequent discussion in my online book. But you can find similar statements in all books on stochastic physics where correlations are discussed in a thermodynamic context if you care to look, though usually for different, thermodynamically relevant variables. [more on this here]

I think those are missing the point. I'm making a point just about the linearity of quantum evolution. If you have one state, ##|\psi_1\rangle##, corresponding to an object being localized at location A, and another state, ##|\psi_2\rangle##, corresponding to an object being localized at location B, with A and B far apart, then there is a third state, ##|\psi\rangle = \alpha |\psi_1\rangle + \beta |\psi_2\rangle##, with a significant standard deviation for the position of the object. If you furthermore assume that A and B are separated by a potential barrier, then quantum mechanics has no mechanism that would tend to reduce that standard deviation through its evolution equations.
 
stevendaryl said:
I think those are missing the point. I'm making a point just about the linearity of quantum evolution. If you have one state, ##|\psi_1\rangle##, corresponding to an object being localized at location A, and another state, ##|\psi_2\rangle##, corresponding to an object being localized at location B, with A and B far apart, then there is a third state, ##|\psi\rangle = \alpha |\psi_1\rangle + \beta |\psi_2\rangle##, with a significant standard deviation for the position of the object. If you furthermore assume that A and B are separated by a potential barrier, then quantum mechanics has no mechanism that would tend to reduce that standard deviation through its evolution equations.
As the overwhelming success of statistical mechanics shows, macroscopic objects are correctly described by mixed states with a density operator of the form ##e^{-S/k_B}## and a suitable operator ##S## that depends on how detailed the observables of interest are. There is no superposition principle for such states!

The superposition principle that you invoke is only a feature of pure states. But pure states are the exception in Nature - they exist only for systems with very few discrete degrees of freedom, and approximately for systems with few continuous degrees of freedom, for systems at temperatures very close to absolute zero, and for purely electronic systems at temperatures where the excited states are not yet significantly populated.
 
A. Neumaier said:
As the overwhelming success of statistical mechanics shows, macroscopic objects are correctly described by mixed states with a density operator of the form ##e^{-S/k_B}## and a suitable operator ##S## that depends on how detailed the observables of interest are. There is no superposition principle for such states!

Yes, I understand that it's possible to sweep the problems under the rug, and ignore them, but it's not an intellectually satisfying thing to do. Using the type of mixed state that you are talking about is already assuming the conclusion. You can't describe something as statistical perturbations around a central median value unless you already know that it has a small standard deviation. You can't prove that the standard deviation is small by using that representation--that's circular.

Yes, I know that you can justify it empirically--it works. Empirically, macroscopic objects have state variables with small standard deviations. I agree that that's an empirical fact, but I'm disagreeing that it is explained by smooth evolution of the wave function. And it certainly isn't explained (in a noncircular way) by your assuming it to be true.
 
Likes: RockyMarciano and Jimster41
If I take the point of view that pure states don't make sense for macroscopic objects, what's the significance of microstates in statistical mechanics in general? For example, a very common notion of the entropy is how many microstates are compatible with a given macrostate. If the microstates don't represent the actual states which the macroscopic system can occupy, how does this make sense?
 
kith said:
If I take the point of view that pure states don't make sense for macroscopic objects, what's the significance of microstates in statistical mechanics in general? For example, a very common notion of the entropy is how many microstates are compatible with a given macrostate. If the microstates don't represent the actual states which the macroscopic system can occupy, how does this make sense?
Microstates are an artifice that relates the quantum mechanical entropy to the information theoretical entropy. This is a powerful analogy, but cannot be taken literally.

Microstates never represent the actual states since they are defined to be eigenstates of the Hamiltonian. These are time invariant, hence if the actual state were one of these it would be this state for all times, and all expectations computed would come out utterly wrong.
 
stevendaryl said:
Yes, I understand that it's possible to sweep the problems under the rug, and ignore them, but it's not an intellectually satisfying thing to do. Using the type of mixed state that you are talking about is already assuming the conclusion. You can't describe something as statistical perturbations around a central median value unless you already know that it has a small standard deviation. You can't prove that the standard deviation is small by using that representation--that's circular.

As I explained in the other thread, nothing is circular; I am just using improved foundations. The foundations must always assume something, but this doesn't make it circular. As everywhere in foundations, one simply picks from the many available facts a few that are simple and easy to motivate, in such a way that everything of interest can be derived from them.

Assuming the form ##\rho=e^{-S/k_B}## is a very weak assumption that by no means in itself implies that the standard deviation is small. It only excludes density operators with a zero eigenvalue; nothing else. (But it excludes pure states, since these always have a zero eigenvalue.) Moreover, the form is invariant under unitary evolution, since eigenvalues are preserved by the dynamics. If the state of an isolated system has this form at one time then it has this form at all times. Thus it is a very natural assumption.

In particular, for microscopic systems, assuming the form ##\rho=e^{-S/k_B}## doesn't imply anything about the size of the standard deviation.
For example, in a 2-state system, any non-pure state can be written in this form. And people analyzing the accuracy of foundational quantum experiments have to work with mixed states (without zero eigenvalues, hence of my assumed form!) since this is the only way to account for the real behavior of the photons and detectors involved - the pure state descriptions used in the theoretical arguments are always highly idealized.

So how can my assumption have anything to do with circular reasoning?

To conclude from my assumption that macroscopic observables have small standard deviations one needs a significant amount of additional input: The form of the macroscopic observables, the typical multiparticle form of the Hamiltonian (a huge sum of the standard 1-body plus 2-body plus perhaps 3-body potentials), and the fact that macroscopic objects are defined as those with a huge number of particles in local equilibrium. This input is valid only for macroscopic systems, and deriving from it a small standard deviation is still nontrivial work.
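To make the 2-state remark above concrete, here is a small sketch (in units with ##k_B=1##, and with an arbitrarily chosen ##\rho##): any full-rank density matrix can be written as ##e^{-S/k_B}##, and this form alone places no bound on standard deviations.

```python
import numpy as np
from scipy.linalg import expm

k_B = 1.0                                        # units with k_B = 1 (assumption of this sketch)

# Arbitrary 2-state density matrix: almost pure, but full rank
# (eigenvalues 0.99 and 0.01, i.e. no zero eigenvalue).
rho = np.array([[0.50, 0.49],
                [0.49, 0.50]])

# Entropy operator S = -k_B log(rho), built from the spectral decomposition of rho.
w, V = np.linalg.eigh(rho)
S = V @ np.diag(-k_B * np.log(w)) @ V.T

print(np.allclose(expm(-S / k_B), rho))          # True: rho = exp(-S/k_B)

# ...yet a standard deviation can still be maximal for this rho:
sz = np.diag([1.0, -1.0])
mean = np.trace(rho @ sz)
std = np.sqrt(np.trace(rho @ sz @ sz) - mean**2)
print(mean, std)                                 # <sigma_z> = 0, std = 1
```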
 
A. Neumaier said:
Microstates are an artifice that relates the quantum mechanical entropy to the information theoretical entropy. This is a powerful analogy, but cannot be taken literally.

Microstates never represent the actual states since they are defined to be eigenstates of the Hamiltonian. These are time invariant, hence if the actual state were one of these it would be this state for all times, and all expectations computed would come out utterly wrong.

A different point of view is that although the microstates of the canonical ensemble may be an artifice, it could still make sense to assign a pure state to a macroscopic object, e.g., https://arxiv.org/abs/1302.3138.
 
A. Neumaier said:
Microstates are an artifice that relates the quantum mechanical entropy to the information theoretical entropy. This is a powerful analogy, but cannot be taken literally.
So would you say that also in classical statistical mechanics, a microstate which is characterized by the positions and momenta of all the particles of a macroscopic object is only a calculation tool? And if a physicist in the classical era knew some macroscopic properties, he shouldn't have pictured the object to be in a certain unknown microstate?
 
  • #10
A. Neumaier said:
Microstates never represent the actual states since they are defined to be eigenstates of the Hamiltonian. These are time invariant, hence if the actual state were one of these it would be this state for all times, and all expectations computed would come out utterly wrong.
Why are the only states of quantum objects allowed to be eigenstates of the Hamiltonian? This doesn't make sense to me, and I guess it's not what you wanted to say. This is the conclusion drawn by many students after hearing QM1, because the professor used to only let them solve the time-independent Schrödinger equation ;-).
 
  • #11
atyy said:
it could still make sense to assign a pure state to a macroscopic object
Every mixed state in a given Hilbert space is a pure state in a different Hilbert space, namely the Hilbert space of Hermitian trace-class operators with the trace inner product. But in this Hilbert space, the superposition principle is not valid, as not every pure state in this space is realized as a mixed state in the original Hilbert space.

However, the paper you cited employs a different construction. This construction is very artificial in that it depends on random numbers and doesn't give the correct state for any N but only in the thermodynamic limit. It therefore cannot be used in systems that consist of a microscopic system and a macroscopic system, as needed for the measurement process. It is also very restricted in scope as it cannot account for macroscopic nonequilibrium systems, which is the most typical macroscopic situation.
 
  • #12
kith said:
So would you say that also in classical statistical mechanics, a microstate which is characterized by the positions and momenta of all the particles of a macroscopic object is only a calculation tool? And if a physicist in the classical era knew some macroscopic properties, he shouldn't have pictured the object to be in a certain unknown microstate?
Yes, since it ignores the identity of particles, which must be introduced by hand to get the physically correct statistics. Gibbs, who solved the issue in this way, was well aware of the limitations. He modeled the single system by an ensemble, well knowing that he considered fictitious objects in the ensemble whose average he was taking. The correct macroscopic properties appear only in the mixed state, not in the single microstate.
 
  • #13
vanhees71 said:
Why are the only states of quantum objects allowed to be eigenstates of the Hamiltonian? This doesn't make sense to me, and I guess it's not what you wanted to say. This is the conclusion drawn by many students after hearing QM1, because the professor used to only let them solve the time-independent Schrödinger equation ;-).
I was only referring to the microstates used to represent the entropy by a counting formula, as kith had asked for. One cannot count arbitrary pure states (of which there are uncountably many), only eigenstates. The alternative is to count cells in phase space, but it is obvious that the division into cells is an artifice, too.
 
  • #14
The (micro-canonical) entropy, given the state is described by the statistical operator ##\hat{\rho}##, is
$$S=-\mathrm{Tr}(\hat{\rho} \ln \hat{\rho}).$$
The notion of "phase-space cells" is already a coarse-grained concept in the spirit of the Boltzmann equation, where you consider a dilute gas. You take a macroscopically small but microscopically large volume (for simplicity a cube of length ##L##) and place it somewhere in the gas. To make sense of a "phase-space" cell a momentum operator should exist, and thus you assume the wave functions to have periodic boundary conditions. Then in a macroscopically small but microscopically large momentum-space volume element you have $$\mathrm{d}^3 \vec{p}\, \mathrm{d}^3 \vec{x}/(2 \pi \hbar)^3$$ states (with ##\mathrm{d}^3 \vec{x}=L^3##). That introduces the phase-space measure for classical statistics, where it is otherwise missing for lack of a "natural unit" of action, which is provided in QT by Planck's constant ##\hbar##.
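For reference, a minimal sketch of evaluating the entropy formula quoted above directly from the eigenvalues of ##\hat{\rho}## (with ##k_B=1##):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), evaluated via the eigenvalues of rho (k_B = 1)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-15]                   # 0 * ln 0 -> 0 by convention
    return -np.sum(p * np.log(p))

rho_mixed = np.eye(2) / 2              # maximally mixed qubit
rho_pure = np.diag([1.0, 0.0])         # pure state

print(von_neumann_entropy(rho_mixed), np.log(2))   # ln 2 for the maximally mixed state
print(von_neumann_entropy(rho_pure))               # 0 for a pure state
```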
 
  • #15
I'm afraid that I must disagree completely with the claims being made in this thread. They are false. (Well, I should say that they are false if there is no physical collapse of the wave function, there is only unitary evolution of the wave function.)

Suppose you have two systems interacting. For simplicity, let's assume that one of those systems is extremely simple, and its Hilbert space has a two-element basis, ##|u\rangle## and ##|d\rangle##. Without specifying in detail the other system or the interaction between the two, let's suppose that the interaction between the two works in the following way:
  • If the composite system is placed initially in the state ##|u\rangle \otimes |start\rangle##, then it will almost surely evolve into the state ##|u\rangle |U\rangle##.
  • If the composite system is placed initially in the state ##|d\rangle \otimes |start\rangle##, then it will almost surely evolve into the state ##|d\rangle |D\rangle##.
Then according to quantum mechanics, if the composite system is placed initially in the state ##\frac{1}{\sqrt{2}} |u\rangle |start\rangle + \frac{1}{\sqrt{2}} |d\rangle |start\rangle##, then it will evolve into the state ##\frac{1}{\sqrt{2}} |u\rangle |U\rangle + \frac{1}{\sqrt{2}} |d\rangle |D\rangle##. If states ##|U\rangle## and ##|D\rangle## correspond to macroscopically different values for some state variable, then that state variable will not have a small standard deviation.

I would think that this is beyond question. Quantum evolution for pure states is linear. Now, you can object that if the second system is supposed to be a macroscopic measurement device, then we can't talk about pure states. I really do consider that to be an obfuscating objection, rather than a clarifying one. You can do the same analysis using density matrices, rather than pure states. The conclusion will be the same---the standard deviation of the state variable for the measuring device will not remain small.
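A toy numerical version of the argument above (purely illustrative: the "pointer" is reduced to two states localized at ##\pm x_{\rm sep}## in arbitrary units, standing in for macroscopically distinct readings):

```python
import numpy as np

x_sep = 1.0                                         # pointer positions +- x_sep (arbitrary units)
u, d = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # system basis |u>, |d>
U, D = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # pointer basis |U> (left), |D> (right)

# Linearity: (|u> + |d>)/sqrt(2) |start>  ->  (|u>|U> + |d>|D>)/sqrt(2)
psi = (np.kron(u, U) + np.kron(d, D)) / np.sqrt(2)

# Pointer position operator X = 1_system (x) diag(-x_sep, +x_sep)
X = np.kron(np.eye(2), np.diag([-x_sep, +x_sep]))

mean = psi @ X @ psi
std = np.sqrt(psi @ X @ X @ psi - mean**2)
print(mean, std)    # mean 0, std = x_sep: the spread equals the macroscopic separation
```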
 
Likes: eloheim and RockyMarciano
  • #16
A. Neumaier said:
As I explained in the other thread, nothing is circular; I am just using improved foundations. The foundations must always assume something, but this doesn't make it circular.

Your reasoning is incorrect, whether it should be called circular or not. Let me spell out a scenario that I think illustrates the problem.

Consider a system with three parts:
  1. A source of electrons.
  2. A filter that only passes electrons that are spin-up in the x-direction.
  3. A detector that measures the z-component of the spins of the electrons.
To be picturesque, let's assume that the detector has an actual pointer, an arrow that swings to the left to indicate a spin-up electron has been detected, and swings to the right to indicate a spin-down electron.

The recipe for applying quantum mechanics that comes to us from the Copenhagen interpretation would say that the detector will in such a setup either end up pointing left, with probability 1/2, or pointing right, with probability 1/2.

The Many-Worlds interpretation would say that, if we treat the whole setup quantum-mechanically, we end up with a superposition of two "possible worlds", one of which consists of the arrow pointing left, and the other consisting of an arrow pointing right.

Both of these interpretations have their problems, but I can sort of understand them. You, on the other hand, seem to be claiming that pure unitary evolution leads not to two possibilities, the arrow pointing to the left, or the arrow pointing to the right. You seem to be claiming that unitary evolution will lead to just one of those possibilities. I think that's an astounding claim. I actually think that it's provably wrong, but alas, I'm not a good enough theoretician to prove it. But I think it contradicts everything that is known about quantum mechanics.
 
Likes: eloheim
  • #17
stevendaryl said:
You, on the other hand, seem to be claiming that pure unitary evolution leads not to two possibilities, the arrow pointing to the left, or the arrow pointing to the right. You seem to be claiming that unitary evolution will lead to just one of those possibilities. I think that's an astounding claim. I actually think that it's provably wrong, but alas, I'm not a good enough theoretician to prove it. But I think it contradicts everything that is known about quantum mechanics.

What occurs to me is that if you are correct, then that is tantamount to making the claim that Many-Worlds would actually only lead to one world. That's an astounding claim, and I don't think that it's a mainstream result.
 
  • #18
stevendaryl said:
You seem to be claiming that unitary evolution will lead to just one of those possibilities.
No. Since the system is not isolated, there is no unitary evolution. Unitary evolution applies only to systems that are completely isolated, and there is only a single such system, the whole universe. Once one acknowledges that a small system (such as the 3-part system you describe) is necessarily an open system, and there is no doubt about that, arguing with unitary evolution of the system is valid only for times far below the already extremely short decoherence time.

Accepting the openness means having to use dissipative quantum mechanics, and there the notion of a pure state ceases to make sense. Instead one must work with density operators, where talking about superpositions is meaningless. In the case of your setting, the density operators encode the state of an ensemble of 3-part systems (not of a single 3-part system, because one cannot prepare the macroscopic part in sufficient detail to know what happens at the violently unstable impact magnification level). The dissipative nature together with the bistability of the detector leads to a single random outcome with probabilities predicted by quantum mechanics. The pointer on each single system will have always a tiny uncertainty, as predicted by statistical mechanics, but the ensemble of systems has an ensemble of pointers whose uncertainty can be arbitrarily large, since already a classical ensemble has this property.

That bistability produces a random outcome is already familiar from the supposedly deterministic classical mechanics, where an inverted pendulum suspended at the point of instability will swing to a random side which is determined by tiny, unknown details in which the model and the real pendulum differ, and tiny, unknown details of the unmodeled environment. One should not expect quantum mechanics to be better behaved.
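A classical toy sketch of that bistability (a crudely integrated inverted pendulum; the 1e-12 rad perturbation scale is an arbitrary stand-in for the "tiny, unknown details"):

```python
import numpy as np

rng = np.random.default_rng(0)
g_over_l = 1.0                      # g/l in chosen units
dt, n_steps = 1e-3, 20_000          # crude explicit integration

def final_side(theta0):
    """Integrate theta'' = (g/l) sin(theta) from rest near the unstable equilibrium."""
    theta, omega = theta0, 0.0
    for _ in range(n_steps):
        omega += g_over_l * np.sin(theta) * dt
        theta += omega * dt
    return int(np.sign(theta))      # -1: swings left, +1: swings right

# Release "exactly" upright, up to tiny unmodelled perturbations of order 1e-12 rad.
sides = [final_side(1e-12 * rng.standard_normal()) for _ in range(20)]
print(sides)                        # an apparently random +-1 sequence, roughly 50/50
```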
 
Likes: dextercioby
  • #19
stevendaryl said:
..
..
You, on the other hand, seem to be claiming that pure unitary evolution leads not to two possibilities, the arrow pointing to the left, or the arrow pointing to the right. You seem to be claiming that unitary evolution will lead to just one of those possibilities. I think that's an astounding claim. I actually think that it's provably wrong, but alas, I'm not a good enough theoretician to prove it. But I think it contradicts everything that is known about quantum mechanics.

Unitary evolution is an abstract mathematical process by which we show how the predicted probabilities evolve in our model. Every experimental run has got a definite outcome, but not through unitary evolution, which can only talk about probabilities. The physics is described by the dynamics, not by unitary evolution. Probability is not stuff.

I don't see your problem. Getting a particular outcome does not conflict with any prediction of QT.
 
  • #20
Mentz114 said:
Unitary evolution is an abstract mathematical process by which we show how the predicted probabilities evolve in our model. Every experimental run has got a definite outcome, but not through unitary evolution, which can only talk about probabilities. The physics is described by the dynamics, not by unitary evolution. Probability is not stuff.

I don't see your problem. Getting a particular outcome does not conflict with any prediction of QT.

You're arguing a point that is orthogonal to the point of this thread. I'm not arguing that definite values are incompatible with unitary evolution; I'm arguing that unitary evolution doesn't predict definite values. An additional assumption is needed, or a lot more computation.

A. Neumaier seems to be arguing that unitary evolution leads to definite values.
 
  • #21
stevendaryl said:
You're arguing a point that is orthogonal to the point of this thread. I'm not arguing that definite values are incompatible with unitary evolution; I'm arguing that unitary evolution doesn't predict definite values. An additional assumption is needed, or a lot more computation.
That lets me off the hook, because I was actually supporting the position that unitary evolution can predict which value.
But I've changed my position for various reasons that would be speculative and off topic here.
But I've changed my position for various reasons that would be speculative and off topic here.

It seems wrong to expect unitary evolution, applied mathematically to a wave function, to be able to give a specific outcome, unless (as you say) a lot more information is encoded in the Hamiltonian and wave function.
There's no finite formula that can predict exactly how the dice will roll.
 
  • #22
A. Neumaier said:
No. Since the system is not isolated, there is no unitary evolution. Unitary evolution applies only to systems that are completely isolated, and there is only a single such system, the whole universe. Once one acknowledges that a small system (such as the 3-part system you describe) is necessarily an open system, and there is no doubt about that, arguing with unitary evolution of the system is valid only for times far below the already extremely short decoherence time.

I think that's a red herring as far as this discussion is concerned. Whether you deal with an open system, and take decoherence into account, or consider the universe as a whole, you should get the same answer.

Accepting the openness means having to use dissipative quantum mechanics, and there the notion of a pure state ceases to make sense. Instead one must work with density operators, where talking about superpositions is meaningless. In the case of your setting, the density operators encode the state of an ensemble of 3-part systems (not of a single 3-part system, because one cannot prepare the macroscopic part in sufficient detail to know what happens at the violently unstable impact magnification level). The dissipative nature together with the bistability of the detector leads to a single random outcome with probabilities predicted by quantum mechanics. The pointer on each single system will have always a tiny uncertainty, as predicted by statistical mechanics, but the ensemble of systems has an ensemble of pointers whose uncertainty can be arbitrarily large, since already a classical ensemble has this property.

I can't make any sense of what you're saying. In the case I'm talking about, there is no actual ensemble, there is a single system, and you're using an ensemble to describe it because you don't know the details of the actual state. So you have a statistical description of a single system, and the standard deviation of the pointer position is arbitrarily large. So what do you mean by saying "each single system will have always a tiny uncertainty"? What does "each system" mean, since we only have one?

Here's what I think is going on. You have a density matrix describing the pointer positions: ##\rho##. Now it is possible to express this density matrix as an incoherent mixture of density matrices: ##\rho = \sum_i p_i \rho_i##, where for each ##i##, the matrix ##\rho_i## corresponds to a density matrix with a tiny uncertainty in the pointer position. That is certainly true, but it doesn't mean that any of those ##\rho_i## describes your actual system. Assuming that your actual system is described by one of the ##\rho_i## is the collapse assumption! So you are implicitly assuming wave function collapse.

The fact that a density matrix can be written as a mixture of possibilities doesn't imply that the actual system described by the density matrix is one of those possibilities. The simplest counter-example is the density matrix for the spin degrees of freedom of a spin-1/2 particle. Look at the matrix ##\rho = \frac{1}{2}|u\rangle \langle u| + \frac{1}{2}|d\rangle \langle d|##, where ##|u\rangle## and ##|d\rangle## mean spin-up and spin-down, respectively, relative to the z-axis. So if that's your density matrix, you can interpret that as:

With probability 1/2, the particle is definitely spin-up in the z-direction, and with probability 1/2, the particle is definitely spin-down in the z-direction.

But the same density matrix can also be interpreted as a 50/50 mixture of spin-up in the x-direction or spin-down in the x-direction. The fact that you can write a density matrix as a mixture of states with definite values for some variable doesn't imply that that variable has a definite value. The collapse hypothesis, as I said, really amounts to taking the density matrix that arises from tracing out environmental degrees of freedom, and reinterpreting it as expressing a proper mixture.
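Numerically, the counter-example reads as follows (a small sketch in the standard z and x spin bases): the 50/50 z-basis mixture and the 50/50 x-basis mixture give one and the same density matrix.

```python
import numpy as np

u = np.array([1.0, 0.0])            # spin-up along z
d = np.array([0.0, 1.0])            # spin-down along z
xp = (u + d) / np.sqrt(2)           # spin-up along x
xm = (u - d) / np.sqrt(2)           # spin-down along x

def proj(v):
    return np.outer(v, v.conj())    # projector |v><v|

rho_z = 0.5 * proj(u) + 0.5 * proj(d)      # 50/50 mixture of z eigenstates
rho_x = 0.5 * proj(xp) + 0.5 * proj(xm)    # 50/50 mixture of x eigenstates

print(np.allclose(rho_z, rho_x))    # True: the decompositions are indistinguishable
print(rho_z)                        # = I/2
```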
 
  • #23
stevendaryl said:
a statistical description of a single system
A statistical description of a single system is absurd since probabilities for a single case are operationally meaningless.

A statistical description always refers to an actual ensemble (the sample) rather than a fictitious ensemble (as in Gibbs' statistical mechanics).

In a statistical description, the observables are not the individual values but the means over a sample, and from these one cannot infer the values but only probabilities.
 
  • #24
The sample (ensemble) can consist of averages over many microscopic degrees of freedom (observables) of the same system. That's the standard way to derive "constitutive" relations for many-body systems, e.g., the dielectric function of a dielectric, heat and electric conductivity, shear and bulk viscosity, etc. These are all quantities defined for macroscopic systems like a fluid (or a quasi-particle fluid). There you first have a separation of scales, namely a slow (time) and spatially large scale of relevant changes of the macroscopic local observables (like one-particle density, flow, temperature, ...) and a fast (time) and spatially small scale of fluctuations of the macroscopic observables as a result of being an average over many microscopic degrees of freedom (e.g., the center of mass (or momentum in the relativistic case) of a fluid cell as the local average over the positions of zillions of particles making up the fluid cell). If the fluctuations of these macroscopic "coarse-grained" quantities are small compared to the large-scale changes of the relevant macroscopic quantities, you very often have an effectively classical behavior, leading, e.g., for a fluid from the full quantum theoretical description (Kadanoff-Baym equations) to an effective classical or semiclassical description on various levels of sophistication like the Boltzmann equation for off-equilibrium situations or ideal or viscous hydrodynamics for systems in local or close to local thermal equilibrium. The formal method is the application of the gradient expansion of the quantum theoretical correlation functions and a cut of the corresponding BBGKY hierarchy (often at the one-particle level, leading to the usual BUU equation and from there to various hydrodynamics-like classical equations). The corresponding constitutive (transport) coefficients are then derived from the statistics over many microscopic degrees of freedom of the same system (not an ensemble of systems).

If, however, it comes to quantum systems, where you do experiments in which microscopic quantities become relevant and the fluctuations are large or comparable to the dynamical changes of these relevant microscopic quantities, you have to do ensemble averages to get the statistics necessary to test the probabilistic predictions of QT. That's, e.g., the case for particle collisions as done at accelerators like the LHC, where you have to repeat the collisions again and again to "collect enough statistics" to learn about the interactions between the particles and their properties via a statistical evaluation of the collected data. That's why the LHC is valuable not only because it is the most powerful machine, providing us with the highest man-made energies of the colliding protons and heavy ions, but also because of its high luminosity, which provides us with enough statistics for high-precision measurements even of very "rare events".

From the theoretical side, all we have in terms of QT are probabilistic predictions, and whether or not QT is "complete" in the sense that there is perhaps an underlying deterministic theory including "hidden variables" remains to be seen. At the moment it looks as if there's no such thing in sight, and from what we know, such a theory must involve non-local interactions and will be quite complicated to find. Perhaps one needs a new idea to test for the possibility of such a theory. After the very precise refutation of local HV theories by demonstrating the violation of the Bell inequality (and related predictions of QT), it may be time to find ways to rule out also non-local HV models, or maybe find hints of their existence and structure. I think that this can only be done via experiments. Scholastic speculation by philosophers has never found a paradigm-shifting new theory in the history of science!
 
  • #25
vanhees71 said:
I think that this can only be done via experiments. Scholastic speculation by philosophers has never found a paradigm-shifting new theory in the history of science!

Actually there is at least one example I know of: the paradigm of punctuated equilibrium in evolutionary biology. Stephen Jay Gould and Niles Eldredge were two people with a deep humanist culture, including the history of the biological sciences in particular and of philosophy in general, well beyond the level of the casual reader, and that is no coincidence.
 
  • #26
stevendaryl said:
Whether you deal with an open system, and take decoherence into account, or consider the universe as a whole, you should get the same answer.
No. The unitary evolution of the universe as a whole is an exact description, while picking out a subsystem and inventing a dissipative dynamics for it is already an approximation.

It is exactly the same kind of approximation that one has in classical mechanics when describing the casting of dice not in terms of a classical N-particle system consisting of a myriad of atoms (of the die and its environment) but in terms of a stochastic process involving only the number of eyes cast. One loses so much information that the final description is only statistical and applies to a sample only, predicting probabilities realized approximately as relative frequencies. The classical N-particle system describes a single system deterministically, but the stochastic process for the few-dof system of interest describes a large sample only.

Since, already classically, going to a reduced description with few dof ruins determinism and introduces stochastic features, there is no reason at all to expect that the quantum situation would be different.

Note that it depends on the reduced system whether the resulting dynamics applies with good accuracy to a single system. When (as for the eyes of a die) the observables of interest depend sensitively on the details of the microscopic description then the reduced description is necessarily stochastic (applies to a large sample of copies only), while when (as for a crystal) the observables of interest are insensitive to the details of the microscopic description then a deterministic reduced description can be reasonably accurate.

Exactly the same holds in the quantum case. A Bell-type experiment is designed to be extremely sensitive to microscopic detail affected by microscopic details of the detector, and hence needs a large sample to match the predictions of the reduced, idealized description. On the other hand, a pointer reading is designed to be insensitive to the details of the apparatus and hence can be described deterministically (in dependence on the input current). But when the input current is so low that the response becomes erratic, the sensitivity is such that unobservable microscopic detail matters, and again a stochastic description is needed to have some predictivity left.

There are simply two kinds of ensembles - those that may be viewed as fictitious copies of a single actual system, where the single system observables already average over microscopic degrees of freedom of this system. This is what is described by statistical mechanics and gives the macroscopic, deterministic view. And those that only make sense for large samples because the unmodelled part already influences the behavior of the single system in an unpredictable way. This is what is described by stochastic processes and gives a statistical view only.

These two kinds of situations exist on the classical and on the quantum level, are treated with very similar mathematical techniques, and behave on the qualitative level very similarly.

All this is completely independent of discussions of superpositions, which are idealizations of the real, mixed state situation.
 
  • #27
A. Neumaier said:
All this is completely independent of discussions of superpositions, which are idealizations of the real, mixed state situation.

That's a matter of opinion. But in any case, it has become clear to me that I don't understand exactly what you're claiming. In particular, I don't know in what sense you are rejecting collapse, since it seems to me that what you're doing is exactly the same as invoking the collapse hypothesis.

Let me illustrate once again, using the simplest possible example, namely spin-1/2 particles, considering only spin degrees of freedom.

If you have two anti-correlated such particles, then they can be described by a pure state:

##|\psi\rangle = \frac{1}{\sqrt{2}} (|u\rangle \otimes |d\rangle - |d\rangle \otimes |u\rangle)##, where ##|u\rangle## and ##|d\rangle## mean spin-up and spin-down relative to the z-axis.

If one of the particles (the first component of the product state) is to be measured by Alice and the other (the second component) by Bob, then we can use a density matrix to describe the state that is relevant for Bob's particle, by "tracing out" the degrees of freedom of Alice's particle. The result is:

##\rho_{\text{Bob}} = \frac{1}{2} |u\rangle \langle u| + \frac{1}{2} |d\rangle \langle d|##

Now, for a mixed state of the form ##\sum_{i} p_i |\phi_i\rangle \langle \phi_i|##, it is called a "proper" mixed state if the coefficients ##p_i## represent ignorance about the true state of the system: ##p_i## represents the probability that the true state of the system is the pure state ##|\phi_i\rangle##. But in the case of entangled systems where you trace out the degrees of freedom of the other system, there may be no ignorance about the state of the composite system. Instead, the mixture is a result of a purely mathematical operation to go from a two-component density matrix to an effective one-component density matrix. It's called an "improper" mixed state in this case.

Now, let's work within the "collapse" interpretation. Suppose that Bob knows that Alice has measured the z-component of the spin of her particle. Under the "collapse" interpretation, there are two possibilities:
  1. Alice measured spin-up, so Bob's particle "collapsed" into the anti-correlated state ##|d\rangle##.
  2. Alice measured spin-down, so Bob's particle "collapsed" into the state ##|u\rangle##.
Each possibility has probability ##\frac{1}{2}##, so after Alice performed her measurement, but before Bob knows the result, he would describe his particle using the density matrix:

##\rho_{\text{Bob}} = \frac{1}{2} |u\rangle \langle u| + \frac{1}{2} |d\rangle \langle d|##

As you can see, this is exactly the same as the "improper" mixed state that he started with, but now it is interpreted as a "proper" mixed state, because now the coefficients ##\frac{1}{2}## are interpreted as probabilities for various alternatives (his particle being spin-up or spin-down).

So there is a sense in which "collapse" just means the reinterpretation of an improper mixed state as a proper mixed state.
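A short sketch of the partial-trace computation described above (assuming the usual tensor-product ordering, Alice's qubit first): tracing Alice out of the singlet gives ##\rho_{\text{Bob}} = \frac{1}{2}\hat{1}##.

```python
import numpy as np

u = np.array([1.0, 0.0])
d = np.array([0.0, 1.0])

# Singlet |psi> = (|u,d> - |d,u>)/sqrt(2), Alice's qubit first in the Kronecker product.
psi = (np.kron(u, d) - np.kron(d, u)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                    # pure state of the pair

# Partial trace over Alice's qubit: rho_Bob[b, b'] = sum_a rho[(a,b), (a,b')]
rho_bob = np.einsum('abac->bc', rho.reshape(2, 2, 2, 2))
print(rho_bob)                                     # [[0.5, 0], [0, 0.5]]: the improper mixture I/2
```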
 
  • #28
I must agree that saying something to the effect that unitary evolution only applies to closed systems, whereas physics experiments are open systems so are allowed to collapse without producing any paradoxes, is merely pushing the problem off the page, rather than resolving it. Of course unitary evolution is an idealization, of course collapse is an idealization-- even concepts like the experiment and the universe are idealizations. We deal in idealizations, we are trying to find a self-consistent one. It might be foolish, our idealizations might never be internally consistent, but it is usually what we try to do anyway. So there's still a problem if, when we idealize our systems as closed, we are left with no way to explain how they would behave. Physics is an attempt to understand open systems that can be idealized as closed, so if unitary evolution cannot apply to systems like that, then unitary evolution isn't physics.
 
Likes: eloheim and maline
  • #29
It's much simpler without the collapse again. As long as Bob doesn't know what Alice has measured but knows that his particle is one of the entangled pair, he uses ##\hat{\rho}_{\text{Bob}}=\hat{1}/2## to describe his particle's state. It doesn't matter whether Alice has measured her particle's spin or not. As long as Bob doesn't know more than that he has one particle of a spin-entangled pair given as the singlet state you prepared in the beginning, he has to use this statistical operator to describe his particle according to the standard rules of quantum theory and the corresponding probabilistic interpretation of the states. Whether you call that mixture proper or improper is totally unimportant. In any case it just tells you in a formal way that Bob has an unpolarized spin-1/2 particle.

Of course, he knows more, and thus if Alice measures her particle's spin and finds "up", and if somehow Bob gets this information, he knows that his particle must have spin "down", and thus after getting this information he must describe his particle's state as the pure state ##\hat{\rho}_{\text{Bob}}'=|d \rangle \langle d|##, which again provides the probabilistic information about his particle and any spin measurements he might perform on it.

Nowhere is a collapse needed to explain the findings of real-world experiments as predicted by QT. It's just a quite simple scheme to predict probabilities for the outcome of measurements given the information about the measured system. The change from ##\hat{\rho}_{\text{Bob}}##, which describes the situation that Alice and Bob share a spin-singlet state of two spin-1/2 particles, to ##\hat{\rho}_{\text{Bob}}'##, which describes the situation after Alice has measured her particle's spin and Bob got the information about what she measured, is just an update according to the new information Bob gained from Alice's measurement and the standard rules of quantum theory.

Now, there's also no contradiction in the description of Bob's particle before and after getting informed about the outcome of Alice's measurement, since probabilistic notions refer to ensembles of equally prepared situations, and indeed ##\hat{\rho}_{\text{Bob}}## (unpolarized particle) refers to one ensemble (Bob's particles from all prepared two spin-1/2 particle spin-singlet pairs) and ##\hat{\rho}_{\text{Bob}}'## to another ensemble (namely the partial ensemble for which Alice has found spin up for her particle). This other ensemble is only half the size of the full ensemble, and it has other properties due to Alice's measurement and Bob's choice to filter out only that partial ensemble where Alice has measured spin up for her particle.

There's no necessity to name one and the same mathematical object once an improper and then a proper mixed state. Rather they just describe Bob's particle's spin state. Period!
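For completeness, a sketch of the update described above (the projector chosen here just encodes "Alice found up"): conditioning the singlet on that outcome gives Bob's sub-ensemble state ##|d\rangle\langle d|##, occurring with probability 1/2, while the unconditioned reduced state stays ##\hat{1}/2##.

```python
import numpy as np

u = np.array([1.0, 0.0])
d = np.array([0.0, 1.0])
psi = (np.kron(u, d) - np.kron(d, u)) / np.sqrt(2)   # singlet, Alice's qubit first
rho = np.outer(psi, psi.conj())

P_up_A = np.kron(np.outer(u, u), np.eye(2))          # projector "Alice found spin up"
rho_cond = P_up_A @ rho @ P_up_A
p_up = np.trace(rho_cond)                            # probability of that outcome
rho_cond = rho_cond / p_up                           # normalized conditional (sub-ensemble) state

rho_bob_cond = np.einsum('abac->bc', rho_cond.reshape(2, 2, 2, 2))
print(p_up)            # 0.5
print(rho_bob_cond)    # [[0, 0], [0, 1]] = |d><d|
```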
 
  • #30
vanhees71 said:
There's no necessity to name one and the same mathematical object once an improper and then a proper mixed state. Rather they just describe Bob's particle's spin state. Period!

For a proper mixed state, the mixture doesn't describe the particle, but describes Bob's knowledge about the particle. That seems like a difference to me.
 
