
I The typical and the exceptional in physics

  1. Sep 15, 2016 #1

    A. Neumaier

    User Avatar
    Science Advisor
    2016 Award

    For properly normalized extensive macroscopic properties (and this includes the center of mass operator), there is such a proof in many treatises of statistical mechanics. It is the quantum analogue of the system size expansion for classical stochastic processes. For example, see Theorem 9.3.3 and the subsequent discussion in my online book. But you can find similar statements in all books on stochastic physics where correlations are discussed in a thermodynamic context if you care to look, though usually for different, thermodynamically relevant variables. [more on this here]
    Indeed. But without simplifying assumptions one can never do anything in physics. Successful science, and hence successful physics, thrives on concentrating on the typical, not on the overly exceptional. No physicist ever considers (except in thought experiments) a system where a single electron is in a superposition of being here and 1000 miles away. It is completely uninteresting from the point of view of applications.


    Everywhere in physics one makes approximations which (in view of the inherent nonlinearity and chaoticity of the dynamics of physical systems when expressed in observable terms) exclude situations that are too exceptional. This is the reason why randomness is introduced in classical physics, and it is the reason why randomness appears in quantum physics. It is a consequence of approximations necessary to be able to get useful and predictable results.

    It is the same with statistical mechanics. Predictions in statistical mechanics exclude all phenomena that require special efforts to prepare.

    For example, irreversibility is the typical situation, and this is heavily exploited everywhere in physics. But by taking special care one can devise experiments such as spin echoes where one can see that the irreversibility assumption can be unwarranted.

    Similarly, it takes a lot of effort to prepare experiments where nonlocal effects are convincingly demonstrated - the typical situation is that nonlocal correlations die out extremely fast and can be ignored. As everywhere in physics, if you want to observe the atypical you need to make special efforts. These may be valuable, but they don't take anything away from the fact that under usual circumstances these effects do not occur.

    If you want to have statements that are valid without exceptions you need to do mathematics, not physics. Mathematical arguments do not allow exceptions (or else make precise statements about their very low probability).
     
  3. Sep 16, 2016 #2

    stevendaryl

    User Avatar
    Staff Emeritus
    Science Advisor

    I think those are missing the point. I'm making a point just about the linearity of quantum evolution. If you have one state, [itex]|\psi_1\rangle[/itex] corresponding to an object being localized at location [itex]A[/itex], and another state, [itex]|\psi_2\rangle[/itex] corresponding to an object being localized at location [itex]B[/itex], with [itex]A[/itex] and [itex]B[/itex] far apart, then there is a third state, [itex]|\psi\rangle = \alpha |\psi_1\rangle + \beta |\psi_2\rangle[/itex] with a significant standard deviation for the position of the object. If you furthermore assume that [itex]A[/itex] and [itex]B[/itex] are separated by a potential barrier, then quantum mechanics has no mechanism that would tend to reduce that standard deviation through its evolution equations.
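    The position spread of such a superposition can be checked in a minimal numerical sketch. This is a hypothetical two-site model (not anything from the thread): the two localized states are idealized as position eigenstates at ##x = 0## and ##x = D##, so the position operator is diagonal in that basis.

    ```python
    import numpy as np

    # Hypothetical two-site model: |psi_1> localized at x = 0, |psi_2> at x = D.
    # In this basis the position operator is simply diagonal.
    D = 1000.0                       # separation (arbitrary units)
    X = np.diag([0.0, D])            # position operator

    alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
    psi = np.array([alpha, beta])    # |psi> = alpha |psi_1> + beta |psi_2>

    mean = psi.conj() @ X @ psi                      # <X> = |beta|^2 D
    var = psi.conj() @ (X @ X) @ psi - mean**2       # <X^2> - <X>^2
    std = np.sqrt(var)                               # = |alpha| |beta| D
    ```

    For the equal-weight superposition the standard deviation is ##D/2##: half the separation, i.e. macroscopically large whenever ##A## and ##B## are far apart.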
     
  4. Sep 16, 2016 #3

    A. Neumaier

    User Avatar
    Science Advisor
    2016 Award

    As the overwhelming success of statistical mechanics shows, macroscopic objects are correctly described by mixed states with a density operator of the form ##e^{-S/k_B}## and a suitable operator ##S## that depends on how detailed the observables of interest are. There is no superposition principle for such states!

    The superposition principle that you invoke is only a feature of pure states. But pure states are the exception in Nature - they exist only for systems with very few discrete degrees of freedom, and approximately for systems with few continuous degrees of freedom, for systems at temperatures very close to absolute zero, and for purely electronic systems at temperatures where the excited states are not yet significantly populated.
     
  5. Sep 16, 2016 #4

    stevendaryl

    User Avatar
    Staff Emeritus
    Science Advisor

    Yes, I understand that it's possible to sweep the problems under the rug, and ignore them, but it's not an intellectually satisfying thing to do. Using the type of mixed state that you are talking about is already assuming the conclusion. You can't describe something as statistical perturbations around a central value unless you already know that it has a small standard deviation. You can't prove that the standard deviation is small by using that representation--that's circular.

    Yes, I know that you can justify it empirically--it works. Empirically, macroscopic objects have state variables with small standard deviations. I agree that that's an empirical fact, but I'm disagreeing that it is explained by smooth evolution of the wave function. And it certainly isn't explained (in a noncircular way) by your assuming it to be true.
     
  6. Sep 16, 2016 #5

    kith

    User Avatar
    Science Advisor

    If I take the point of view that pure states don't make sense for macroscopic objects, what's the significance of microstates in statistical mechanics in general? For example, a very common notion of the entropy is how many microstates are compatible with a given macrostate. If the microstates don't represent the actual states which the macroscopic system can occupy, how does this make sense?
     
  7. Sep 16, 2016 #6

    A. Neumaier

    User Avatar
    Science Advisor
    2016 Award

    Microstates are an artifice that relates the quantum mechanical entropy to the information theoretical entropy. This is a powerful analogy, but it cannot be taken literally.

    Microstates never represent the actual states since they are defined to be eigenstates of the Hamiltonian. These are time invariant, hence if the actual state were one of these it would be this state for all times, and all expectations computed would come out utterly wrong.
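    The stationarity of energy eigenstates is easy to check numerically. The following is an illustrative toy model (the Hamiltonian and observable are arbitrary choices, not from the discussion): for an eigenstate of ##H##, unitary evolution only multiplies the state by a phase, so every expectation value is constant in time, while a superposition of eigenstates gives oscillating expectations.

    ```python
    import numpy as np

    # Toy two-level Hamiltonian (hbar = 1) and an arbitrary observable A.
    H = np.diag([1.0, 2.0])
    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    def evolve(psi, t):
        """Apply exp(-i H t) to psi (H is diagonal here, so this is exact)."""
        return np.exp(-1j * np.diag(H) * t) * psi

    def expectation(psi, t):
        phi = evolve(psi, t)
        return (phi.conj() @ A @ phi).real

    eig = np.array([1.0, 0.0])                 # eigenstate of H: stationary
    sup = np.array([1.0, 1.0]) / np.sqrt(2)    # superposition: oscillates

    exp_eig = [expectation(eig, t) for t in (0.0, 1.0, 2.0)]  # constant
    exp_sup = [expectation(sup, t) for t in (0.0, 1.0, 2.0)]  # cos(t) here
    ```

    If the actual state of a system were such an eigenstate, all its expectations would be frozen for all times, which is exactly the point made above.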
     
  8. Sep 16, 2016 #7

    A. Neumaier

    User Avatar
    Science Advisor
    2016 Award

    As I explained in the other thread, nothing is circular; I am just using improved foundations. The foundations must always assume something, but this doesn't make them circular. As everywhere in foundations, one simply picks from the many available facts a few that are simple and easy to motivate, in such a way that everything of interest can be derived from them.

    Assuming the form ##\rho=e^{-S/k_B}## is a very weak assumption that by no means in itself implies that the standard deviation is small. It only excludes density operators with a zero eigenvalue; nothing else. (But it excludes pure states, since these always have a zero eigenvalue.) Moreover, the form is invariant under unitary evolution, since eigenvalues are preserved by the dynamics. If the state of an isolated system has this form at one time then it has this form at all times. Thus it is a very natural assumption.
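    The two claims in this paragraph - that any full-rank density matrix can be written as ##e^{-S/k_B}##, and that the form survives unitary evolution because eigenvalues are preserved - can be verified numerically. This is a minimal sketch in units where ##k_B = 1##, with an arbitrarily chosen 2-state density matrix:

    ```python
    import numpy as np

    k_B = 1.0  # work in units where Boltzmann's constant is 1

    # A generic non-pure 2-state density matrix (trace 1, both eigenvalues > 0).
    rho = np.array([[0.7, 0.2],
                    [0.2, 0.3]])

    # Since no eigenvalue is zero, S = -k_B log(rho) exists...
    w, V = np.linalg.eigh(rho)
    S = V @ np.diag(-k_B * np.log(w)) @ V.T

    # ...and indeed exp(-S/k_B) reconstructs rho.
    ws, Vs = np.linalg.eigh(S)
    rho_back = Vs @ np.diag(np.exp(-ws / k_B)) @ Vs.T

    # Unitary evolution preserves the eigenvalues, hence the form e^{-S/k_B}.
    theta = 0.3
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    rho_t = U @ rho @ U.T
    w_t = np.sort(np.linalg.eigvalsh(rho_t))
    ```

    A pure state would have a zero eigenvalue, so ##\log\rho## would diverge and no such ##S## exists - which is why pure states are excluded by this form.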

    In particular, for microscopic systems, assuming the form ##\rho=e^{-S/k_B}## doesn't imply anything about the size of the standard deviation.
    For example, in a 2-state system, any non-pure state can be written in this form. And people analyzing the accuracy of foundational quantum experiments have to work with mixed states (without zero eigenvalues, hence of my assumed form!) since this is the only way to account for the
    real behavior of the photons and detectors involved - the pure state descriptions used in the theoretical arguments are always highly idealized.

    So how can my assumption have anything to do with circular reasoning???

    To conclude from my assumption that macroscopic observables have small standard deviations one needs a significant amount of additional input: The form of the macroscopic observables, the typical multiparticle form of the Hamiltonian (a huge sum of the standard 1-body plus 2-body plus perhaps 3-body potentials), and the fact that macroscopic objects are defined as those with a huge number of particles in local equilibrium. This input is valid only for macroscopic systems, and deriving from it a small standard deviation is still nontrivial work.
     
  9. Sep 16, 2016 #8

    atyy

    User Avatar
    Science Advisor

    A different point of view is that although the microstates of the canonical ensemble may be an artifice, it could still make sense to assign a pure state to a macroscopic object, eg. https://arxiv.org/abs/1302.3138.
     
  10. Sep 16, 2016 #9

    kith

    User Avatar
    Science Advisor

    So would you say that also in classical statistical mechanics, a microstate which is characterized by the positions and momenta of all the particles of a macroscopic object is only a calculation tool? And if a physicist in the classical era knew some macroscopic properties, he shouldn't have pictured the object to be in a certain unknown microstate?
     
  11. Sep 16, 2016 #10

    vanhees71

    User Avatar
    Science Advisor
    2016 Award

    Why are the only states of quantum objects allowed to be eigenstates of the Hamiltonian? This doesn't make sense to me, and I guess it's not what you wanted to say. This is the conclusion drawn by many students after hearing QM1, because the professor used to only let them solve the time-independent Schrödinger equation ;-).
     
  12. Sep 16, 2016 #11

    A. Neumaier

    User Avatar
    Science Advisor
    2016 Award

    Every mixed state in a given Hilbert space is a pure state in a different Hilbert space, namely the Hilbert space of Hermitian trace-class operators with the trace inner product. But in this Hilbert space, the superposition principle is not valid, as not every pure state in this space is realized as a mixed state in the original Hilbert space.

    However, the paper you cited employs a different construction. This construction is very artificial in that it depends on random numbers and doesn't give the correct state for any N but only in the thermodynamic limit. It therefore cannot be used in systems that consist of a microscopic system and a macroscopic system, as needed for the measurement process. It is also very restricted in scope as it cannot account for macroscopic nonequilibrium systems, which is the most typical macroscopic situation.
     
  13. Sep 16, 2016 #12

    A. Neumaier

    User Avatar
    Science Advisor
    2016 Award

    Yes, since it ignores the identity of particles, which must be introduced by hand to get the physically correct statistics. Gibbs, who solved the issue in this way, was well aware of the limitations. He modeled the single system by an ensemble, well knowing that he considered fictitious objects in the ensemble whose average he was taking. The correct macroscopic properties appear only in the mixed state, not in the single microstate.
     
  14. Sep 16, 2016 #13

    A. Neumaier

    User Avatar
    Science Advisor
    2016 Award

    I was only referring to the microstates used to represent the entropy by a counting formula, as kith had asked for. One cannot count arbitrary pure states (of which there are uncountably many), only eigenstates. The alternative is to count cells in phase space, but it is obvious that the division into cells is an artifice, too.
     
    Last edited: Sep 16, 2016
  15. Sep 16, 2016 #14

    vanhees71

    User Avatar
    Science Advisor
    2016 Award

    The (micro-canonical) entropy, given the state is described by the statistical operator ##\hat{\rho}##, is
    $$S=-\mathrm{Tr}(\hat{\rho} \ln \hat{\rho}).$$
    The notion of "phase-space cells" is already a coarse-grained concept in the spirit of the Boltzmann equation, where you consider a dilute gas. You take a macroscopically small but microscopically large volume (for simplicity a cube of side length ##L##) and place it somewhere in the gas. To make sense of a "phase-space cell" a momentum operator should exist, and thus you assume the wave functions have periodic boundary conditions. Then in a macroscopically small but microscopically large momentum-space volume element you have $$\mathrm{d}^3 \vec{p} \, \mathrm{d}^3 \vec{x}/(2 \pi \hbar)^3$$ states (with ##\mathrm{d}^3 \vec{x}=L^3##). That introduces the phase-space measure for classical statistics, where it would otherwise be missing for lack of a "natural unit" of action; in QT this unit is provided by Planck's constant ##\hbar##.
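    The entropy formula above is easy to evaluate from the eigenvalues of ##\hat{\rho}##, since ##-\mathrm{Tr}(\hat{\rho}\ln\hat{\rho}) = -\sum_i p_i \ln p_i##. A minimal sketch (the example states are arbitrary illustrations):

    ```python
    import numpy as np

    def von_neumann_entropy(rho):
        """S = -Tr(rho ln rho), evaluated via the eigenvalues of rho."""
        w = np.linalg.eigvalsh(rho)
        w = w[w > 1e-12]               # use the convention 0 * ln 0 = 0
        return float(-np.sum(w * np.log(w)))

    pure = np.array([[1.0, 0.0],
                     [0.0, 0.0]])      # pure state: entropy 0
    mixed = np.eye(2) / 2              # maximally mixed 2-state system: ln 2
    ```

    A pure state gives ##S = 0## and the maximally mixed 2-state system gives ##S = \ln 2##, the familiar limiting cases.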
     
  16. Sep 16, 2016 #15

    stevendaryl

    User Avatar
    Staff Emeritus
    Science Advisor

    I'm afraid that I must disagree completely with the claims being made in this thread. They are false. (Well, I should say that they are false if there is no physical collapse of the wave function, there is only unitary evolution of the wave function.)

    Suppose you have two systems interacting. For simplicity, let's assume that one of those systems is extremely simple, and its Hilbert space has a two-element basis, [itex]|u\rangle[/itex] and [itex]|d\rangle[/itex]. Without specifying in detail the other system or the interaction between the two, let's suppose that the interaction between the two works in the following way:
    • If the composite system is placed initially in the state [itex]|u\rangle \otimes |start\rangle[/itex], then it will almost surely evolve into the state [itex]|u\rangle |U\rangle[/itex].
    • If the composite system is placed initially in the state [itex]|d\rangle \otimes |start\rangle[/itex], then it will almost surely evolve into the state [itex]|d\rangle |D\rangle[/itex].
    Then according to quantum mechanics, if the composite system is placed initially in the state [itex]\frac{1}{\sqrt{2}} |u\rangle |start\rangle + \frac{1}{\sqrt{2}} |d\rangle |start\rangle[/itex], then it will evolve into the state [itex]\frac{1}{\sqrt{2}} |u\rangle |U\rangle + \frac{1}{\sqrt{2}} |d\rangle |D\rangle[/itex]. If states [itex]|U\rangle[/itex] and [itex]|D\rangle[/itex] correspond to macroscopically different values for some state variable, then that state variable will not have a small standard deviation.
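    The two-branch evolution described above can be checked in a toy model. As a hypothetical minimal choice (not anything specific from the thread), a CNOT gate stands in for the measurement interaction, with the device's ##|0\rangle## playing both ##|start\rangle## and ##|U\rangle## and its ##|1\rangle## playing ##|D\rangle##:

    ```python
    import numpy as np

    # CNOT as a stand-in for the measurement interaction:
    # |u>|start> -> |u>|U>,  |d>|start> -> |d>|D>, by linearity any superposition
    # of inputs evolves into the corresponding superposition of outputs.
    CNOT = np.array([[1., 0., 0., 0.],
                     [0., 1., 0., 0.],
                     [0., 0., 0., 1.],
                     [0., 0., 1., 0.]])

    u = np.array([1.0, 0.0])
    d = np.array([0.0, 1.0])
    start = np.array([1.0, 0.0])

    # Pointer observable: +1 if the device reads |U>, -1 if it reads |D>.
    Z = np.kron(np.eye(2), np.diag([1.0, -1.0]))

    def pointer_std(system_state):
        psi = CNOT @ np.kron(system_state, start)   # linear unitary evolution
        mean = psi @ Z @ psi
        return np.sqrt(psi @ (Z @ Z) @ psi - mean**2)

    sharp = pointer_std(u)                      # definite input: std 0
    spread = pointer_std((u + d) / np.sqrt(2))  # superposed input: maximal std
    ```

    Either definite input leaves the pointer sharp, while the superposed input gives the pointer variable its maximal standard deviation - the point being made here about linearity.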

    I would think that this is beyond question. Quantum evolution for pure states is linear. Now, you can object that if the second system is supposed to be a macroscopic measurement device, then we can't talk about pure states. I really do consider that to be an obfuscating objection, rather than a clarifying one. You can do the same analysis using density matrices, rather than pure states. The conclusion will be the same---the standard deviation of the state variable for the measuring device will not remain small.
     
  17. Sep 16, 2016 #16

    stevendaryl

    User Avatar
    Staff Emeritus
    Science Advisor

    Your reasoning is incorrect, whether it should be called circular or not. Let me spell out a scenario that I think illustrates the problem.

    Consider a system with three parts:
    1. A source of electrons.
    2. A filter that only passes electrons that are spin-up in the x-direction.
    3. A detector that measures the z-component of the spins of the electrons.
    To be picturesque, let's assume that the detector has an actual pointer, an arrow that swings to the left to indicate a spin-up electron has been detected, and swings to the right to indicate a spin-down electron.

    The recipe for applying quantum mechanics that comes to us from the Copenhagen interpretation would say that the detector will in such a setup either end up pointing left, with probability 1/2, or pointing right, with probability 1/2.

    The Many-Worlds interpretation would say that, if we treat the whole setup quantum-mechanically, we end up with a superposition of two "possible worlds", one of which consists of the arrow pointing left, and the other consisting of an arrow pointing right.

    Both of these interpretations have their problems, but I can sort of understand them. You, on the other hand, seem to be claiming that pure unitary evolution leads not to two possibilities, the arrow pointing to the left, or the arrow pointing to the right. You seem to be claiming that unitary evolution will lead to just one of those possibilities. I think that's an astounding claim. I actually think that it's provably wrong, but alas, I'm not a good enough theoretician to prove it. But I think it contradicts everything that is known about quantum mechanics.
     
  18. Sep 16, 2016 #17

    stevendaryl

    User Avatar
    Staff Emeritus
    Science Advisor

    What occurs to me is that if you are correct, then that is tantamount to making the claim that Many-Worlds would actually only lead to one world. That's an astounding claim, and I don't think that it's a mainstream result.
     
  19. Sep 16, 2016 #18

    A. Neumaier

    User Avatar
    Science Advisor
    2016 Award

    No. Since the system is not isolated there is no unitary evolution. Unitary evolution applies only to systems that are completely isolated, and there is only a single such system, the whole universe. Once one acknowledges that a small system (such as the 3-part system you describe) is necessarily an open system - and there is no doubt about that - arguing with unitary evolution of the system is valid only for times far below the already extremely short decoherence time.

    Accepting the openness means having to use dissipative quantum mechanics, and there the notion of a pure state ceases to make sense. Instead one must work with density operators, for which talking about superpositions is meaningless. In your setting, the density operator encodes the state of an ensemble of 3-part systems (not of a single 3-part system, because one cannot prepare the macroscopic part in sufficient detail to know what happens at the violently unstable impact magnification level). The dissipative nature together with the bistability of the detector leads to a single random outcome with probabilities predicted by quantum mechanics. The pointer of each single system will always have a tiny uncertainty, as predicted by statistical mechanics, but the ensemble of systems has an ensemble of pointers whose uncertainty can be arbitrarily large, since already a classical ensemble has this property.

    That bistability produces a random outcome is already familiar from the supposedly deterministic classical mechanics, where an inverted pendulum suspended at the point of instability will swing to a random side which is determined by tiny, unknown details in which the model and the real pendulum differ, and tiny, unknown details of the unmodeled environment. One should not expect quantum mechanics to be better behaved.
     
  20. Sep 16, 2016 #19

    Mentz114

    User Avatar
    Gold Member

    Unitary evolution is an abstract mathematical process by which we show how the predicted probabilities evolve in our model. Every experimental run has a definite outcome, but not through unitary evolution, which can only talk about probabilities. The physics is described by the dynamics, not by unitary evolution. Probability is not stuff.

    I don't see your problem. Getting a particular outcome does not conflict with any prediction of QT.
     
  21. Sep 16, 2016 #20

    stevendaryl

    User Avatar
    Staff Emeritus
    Science Advisor

    You're arguing a point that is orthogonal to the point of this thread. I'm not arguing that definite values are incompatible with unitary evolution; I'm arguing that unitary evolution doesn't predict definite values. An additional assumption is needed, or a lot more computation.

    A. Neumaier seems to be arguing that unitary evolution leads to definite values.
     