Where in the usual applications in statistical physics does the entropy become negative?
> Where in the usual applications in statistical physics does the entropy become negative?

In Boltzmann's H-theorem, since there the energy has a continuous spectrum.
> In the usual definition you start with a finite volume and thus the discrete case. Obviously the entropy is always positive,
> $$S=-\mathrm{Tr} \hat{\rho} \ln \hat{\rho}=-\sum_{i} \rho_i \ln \rho_i,$$
> where ##\rho_i## are the eigenvalues of ##\hat{\rho}##. Since ##\hat{\rho}## is positive semidefinite and ##\mathrm{Tr} \hat{\rho}=1##, you have ##\rho_i \in [0,1]##. For ##\rho_i=0## you have to set ##\rho_i \ln \rho_i=0## by definition in the entropy formula. Thus ##S \geq 0##. Taking the thermodynamic limit keeps ##S \geq 0##.

In the above, you didn't discuss Boltzmann entropy but von Neumann entropy. The latter is definable only for trace class operators ##\rho##, which necessarily have discrete spectrum. Thus they produce discrete probability densities, which of course may be interpreted in terms of information theory. (Though this is artificial, as one cannot implement on the quantum level the decision procedure that gives rise to the notion of Shannon entropy.)
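The formula above is easy to check numerically. A minimal sketch (the two density matrices are illustrative choices): diagonalize ##\hat{\rho}##, apply the convention ##0 \ln 0 = 0##, and observe ##S \geq 0##.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of rho.
    Eigenvalues that are (numerically) zero are dropped, which implements
    the convention 0*ln(0) = 0 used in the text."""
    evals = np.linalg.eigvalsh(rho)        # rho is Hermitian
    evals = evals[evals > 1e-12]           # drop zero eigenvalues
    return -np.sum(evals * np.log(evals))

# Maximally mixed qubit: rho = I/2, so S = ln 2
rho_mixed = np.eye(2) / 2
print(von_neumann_entropy(rho_mixed))      # ≈ 0.6931

# Pure state: rho = |0><0|, so S = 0
rho_pure = np.diag([1.0, 0.0])
print(von_neumann_entropy(rho_pure))       # ≈ 0.0
```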
At which point in the derivation of the Boltzmann equation does ##S## become negative, then?
> Well, supposedly that's then the deeper reason for the necessity of starting with some "regularization" such that the statistical operator has a discrete spectrum, and the thermodynamic limit is not that trivial.

This is not a regularization. Real materials to which statistical mechanics applies have bounded volume.
At least Peres is more careful and consistent than you.
You are using Born's rule, claiming in (2.1.3) of your lecture notes that exact eigenvalues are measured - although these are never measured exactly - in order to derive on p.21 the standard formula for the q-expectation (what you there call the mean value) of known observables (e.g., the mean energy ##\langle H\rangle## in equilibrium statistical mechanics) with unknown (most likely irrational) spectra. But you claim that the resulting q-expectation is not a theoretical construct but is ''in agreement with the fundamental definition of the expectation value of a stochastic variable in dependence of the given probabilities for the outcome of a measurement of this variable.'' This would hold only if your outcomes matched the eigenvalues exactly - ''accurately'' is not enough.
> We have discussed this a zillion times. This is the standard treatment in introductory texts, and rightfully so, because you have to first define the idealized case of precise measurements. Then you can generalize it to more realistic descriptions of imprecise measurements.

But an idealized case can be no more than a didactical prop. Good foundations must be general enough to support all uses.
> In the way POVMs are introduced by Peres, you'd never have discovered QT as a tool to describe what's observed.

Well, neither was it discovered through the way Born's rule is introduced by you, but through trying to theoretically understand black-body radiation, the photoeffect, and spectra. Good foundations should not follow the way of discovery, which is often erratic and tentative, but should provide the concepts needed to handle the general situation by straightforward specialization.
> Where in the quantum-optics literature have you ever needed POVMs rather than the standard formulation of QT to understand all the stringent Bell tests?

They are needed to characterize the equipment in such a way that one can talk reliably about efficiencies and close various loopholes. Most of quantum optics works with POVMs rather than von Neumann measurements; the latter figure only in the simplified accounts.
> You need an operational meaning, i.e., how to apply the formalism to the operations with real-world equipment in the lab. That's what "interpretation" is all about.

Yes, and that's why one needs POVMs rather than Born's rule. Only in introductory courses is the latter sufficient.
> Why then are POVMs so rarely used in practice? I've not seen them used in quantum-optics papers dealing with the foundations. Can you point me to one where they are needed to understand an experiment?

They are used a lot for different purposes. For example, any quantum phase measurement is necessarily a POVM.
Well, this is very clearly using the standard quantum-theoretical formalism to construct (!) the POVM description of the measurement device. That rather confirms my view of the POVM formalism than argues against it.
> Well, this is very clearly using the standard quantum-theoretical formalism to construct (!) the POVM description of the measurement device.

What do you mean by ''construct the POVM description of the measurement device''?
> Are we talking about the same paper?

Yes, but you didn't read it carefully enough.
> Eq. (1) IS Born's rule. What else should it be?

No. In equation (1) on p.2 of https://arxiv.org/pdf/1204.1893, ##\Pi_n## is an arbitrary positive operator from a POVM. Born's rule in its most general form is only the special case of (1) where all ##\Pi_n## are orthogonal projectors.
> Why is (1) not Born's rule? I thought ##\Pi_n## is still a self-adjoint operator.

##\Pi_n## is Hermitian and bounded, hence self-adjoint. But in the formula for probabilities in Born's rule, only orthogonal projection operators figure.
> The only difference is that the ##\Pi_n## are not orthonormal projectors as in the special case of ideal von Neumann filter measurements.

This is an essential difference. It means that Born's rule is only a very special and often unrealistic (i.e., wrong!) case of the correct rule for calculating probabilities for quantum detectors. To write down the correct rule in an introduction to quantum mechanics would in fact be easier than writing down Born's rule, because one needs no discussion of the spectral theorem. Thus there is no excuse for giving in the foundations a special, highly idealized case in place of the real thing.
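The distinction being argued about can be made concrete in a few lines. A minimal sketch (the unsharp-measurement family and the sample density matrix are illustrative choices, not from the thread): a two-outcome qubit POVM ##\Pi_\pm = (I \pm \eta\sigma_z)/2## satisfies positivity and completeness for any ##0 < \eta \leq 1##, but consists of orthogonal projectors - i.e., reduces to the textbook Born rule - only for ##\eta = 1##.

```python
import numpy as np

sz = np.diag([1.0, -1.0])      # Pauli z
I2 = np.eye(2)

def unsharp_povm(eta):
    """Two-outcome qubit POVM: Pi_± = (I ± eta*sz)/2.
    For eta = 1 these are orthogonal projectors (Born's rule);
    for 0 < eta < 1 they are positive operators but NOT projectors."""
    return [(I2 + eta * sz) / 2, (I2 - eta * sz) / 2]

rho = np.array([[0.7, 0.2], [0.2, 0.3]])   # some density matrix, Tr(rho) = 1

for eta in (1.0, 0.8):
    povm = unsharp_povm(eta)
    assert np.allclose(povm[0] + povm[1], I2)            # completeness
    probs = [np.trace(rho @ Pi).real for Pi in povm]
    print(eta, probs, sum(probs))                        # probabilities sum to 1
    # projector test: Pi^2 == Pi holds only in the eta = 1 case
    print([np.allclose(Pi @ Pi, Pi) for Pi in povm])
```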
> Also, as far as I understand, the paper is about how to determine the POVM for a given apparatus, and the necessary analysis is through the standard formalism of measurements in quantum optics.

The method is quantum tomography, which is based on POVMs and semidefinite programming only, nothing else. Of course it needs sources with known density operator. Only for these is textbook quantum optics used.
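The detector-tomography idea can be illustrated in toy form. The sketch below is not the paper's actual procedure (which uses semidefinite programming to enforce positivity); it uses plain linear inversion instead, with an assumed "unknown" detector element: probe the detector with an informationally complete set of known qubit states and recover the POVM element from the resulting outcome probabilities.

```python
import numpy as np

# Informationally complete set of known qubit probe states (density matrices)
ket0 = np.array([1, 0]); ket1 = np.array([0, 1])
ketp = np.array([1, 1]) / np.sqrt(2)            # |+>
keti = np.array([1, 1j]) / np.sqrt(2)           # |+i>
probes = [np.outer(k, k.conj()) for k in (ket0, ket1, ketp, keti)]

# "True" POVM element we pretend not to know (an unsharp |0> detector)
Pi_true = np.diag([0.9, 0.1])

# Simulated outcome probabilities p_k = Tr(rho_k Pi) for each probe state
p = np.array([np.trace(r @ Pi_true) for r in probes])

# Linear inversion: Tr(rho Pi) = vec(rho^T) . vec(Pi), so solve A x = p
A = np.array([r.T.reshape(-1) for r in probes])
x, *_ = np.linalg.lstsq(A, p, rcond=None)
Pi_est = x.reshape(2, 2)
print(np.round(Pi_est.real, 6))   # should recover Pi_true up to numerical error
```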
> I still don't see in which sense the POVM formalism is an extension of standard QT. To the contrary, it's based on standard QT, applied to open systems in contradistinction to the idealized description of measurements on closed systems.

On a closed system, one cannot make a measurement at all, not even one satisfying Born's original rule.
> The only difference is that the ##\Pi_n## are not orthonormal projectors as in the special case of ideal von Neumann filter measurements.

Another important difference is that a POVM measurement makes no claim about which values are measured.
> He introduces the POVM using the Born rule in the standard way for a closed system, tracing out what he calls the "ancilla".

This is not the introduction; rather, after having already introduced POVMs he shows that the concept is consistent with the traditional setting, but on an (unphysical, just formally constructed) extended Hilbert space.
> This is not the introduction; rather, after having already introduced POVMs he shows that the concept is consistent with the traditional setting, but on an (unphysical, just formally constructed) extended Hilbert space.

This is the whole "church of the smaller/larger Hilbert space" issue in quantum foundations: whether POVMs are fundamental, or whether they're always PVMs with ancillas.
> This is the whole "church of the smaller/larger Hilbert space" issue in quantum foundations: whether POVMs are fundamental, or whether they're always PVMs with ancillas.

Well, at least once you go to QFT, there is no natural way to add the ancillas. It is a purely formal trick to reduce POVMs and related measurement issues to the standard (problematic) foundations.
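The "formal trick" under discussion is the Naimark dilation: every POVM can be written as a projective measurement on an enlarged Hilbert space. A minimal numerical sketch (the three-outcome "trine" POVM is an illustrative choice; since a qubit PVM has at most two outcomes, the enlargement is genuinely needed here, and turning the isometry below into a unitary plus an ancilla state would take one further standard step):

```python
import numpy as np

def op_sqrt(E):
    """Hermitian square root of a positive semidefinite operator."""
    w, U = np.linalg.eigh(E)
    return U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.conj().T

# Trine POVM on a qubit: three outcomes, so it cannot be a PVM on C^2
kets = [np.array([np.cos(n * np.pi / 3), np.sin(n * np.pi / 3)]) for n in range(3)]
povm = [(2 / 3) * np.outer(k, k) for k in kets]
assert np.allclose(sum(povm), np.eye(2))           # completeness

# Naimark dilation: isometry V = vertical stack of sqrt(E_n), C^2 -> C^6
V = np.vstack([op_sqrt(E) for E in povm])          # shape (6, 2)
assert np.allclose(V.conj().T @ V, np.eye(2))      # V is an isometry

# Projective measurement on the enlarged space: P_n projects onto block n
for n, E in enumerate(povm):
    P = np.zeros((6, 6))
    P[2 * n:2 * n + 2, 2 * n:2 * n + 2] = np.eye(2)
    assert np.allclose(V.conj().T @ P @ V, E)      # recovers the POVM element
print("POVM realized as a PVM on the extended space")
```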
> Well, at least once you go to QFT, there is no natural way to add the ancillas. It is a purely formal trick to reduce POVMs and related measurement issues to the standard (problematic) foundations.

I agree. I mentioned it just to inform people of the terms should they encounter them. I myself can't make sense of the "always due to an ancilla" view of POVMs.
> How can the wave function be not ontic when its dynamics determines the positions at future times?

This happens in the objective Bayesian probability interpretation. There exists some reality, and there exists incomplete but nonetheless objective information about it. It defines a probability distribution - the one which maximizes entropy given the particular information.
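The maximum-entropy construction invoked here can be made concrete with the classic loaded-die example (the outcome values and the assumed mean constraint of 4.5 are illustrative): among all distributions with the given mean, the entropy maximizer has exponential form ##p_i \propto e^{-\lambda x_i}##, with ##\lambda## fixed by the constraint.

```python
import numpy as np

def maxent_given_mean(x, m, lam_bounds=(-50.0, 50.0), tol=1e-12):
    """Maximum-entropy distribution on outcomes x with prescribed mean m.
    The maximizer has exponential form p_i ~ exp(-lam * x_i); lam is found
    by bisection, since the mean is monotone decreasing in lam."""
    def mean(lam):
        w = np.exp(-lam * (x - x.mean()))   # shift for numerical stability
        p = w / w.sum()
        return p @ x
    lo, hi = lam_bounds
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) > m:
            lo = mid        # mean too large -> need larger lam
        else:
            hi = mid
    w = np.exp(-lo * (x - x.mean()))
    return w / w.sum()

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # faces of a die
p = maxent_given_mean(x, 4.5)                   # prescribed mean 4.5 > fair 3.5
print(np.round(p, 4), p @ x)                    # weights tilted toward high faces
```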
Something nonexistent cannot affect the existent.
> There exists incomplete but nonetheless objective information about it. It defines a probability distribution - the one which maximizes entropy given the particular information.

This is a very questionable statement.