Evaluate this paper on the derivation of the Born rule

In summary, the paper discusses the Curie-Weiss model of the quantum measurement process and how it can be used to derive the Born rule.
  • #71
stevendaryl said:
I'm saying (actually, I did say) that an electron that is in a superposition of spin-up and spin-down is neither spin-up nor spin-down until we measure it. What this implies about dead cats is complicated.
According to standard QT, in this case the spin-##z## component is indeterminate, and it is not indeterminate because we don't know it; it is really indeterminate. All that is known about ##\sigma_z## are the probabilities for finding the two possible values when measuring the observable ##\sigma_z##. That's it. There is no more according to QT.
 
  • #72
Prathyush said:
I can construct a detailed experiment, but that would require time. Would you agree with the following statement: when a measurement is performed, the state of the system (meaning the information available to us about it) in general changes to reflect the outcome of the measurement.
In the Schrödinger picture the statistical operator changes according to the full Hamiltonian of the system. The Hamiltonian must include the interaction with the complete setup. That's all there is, and it is equally valid in classical physics. The only difference is that in classical physics, where you measure macroscopic observables, you can make the influence of the measurement apparatus arbitrarily small. This is not the case when you measure microscopic observables. E.g., to measure the electric field of an electron there is no suitable test charge, because any available charge is at least as large as the electron's charge.
 
  • #73
Prathyush said:
I don't understand at all what you are saying. Collapse is not a physical process; the wavefunction is our description of the system, that is all.
Well, in physics you don't want a self-contradictory description. The collapse hypothesis is incompatible with the very foundations of physics, i.e., the causality structure of relativistic spacetime. So why should you assume such a thing? I don't know since I don't know a single example of a real-world experiment, where this assumption is really needed.
 
  • #74
Prathyush said:
This is clearly a question that I haven't thought about in depth, this discussion was extremely fruitful to me because it brought these issues into the forefront.
Chapters 7-10 in my online book derive everything without assuming Born's rule anywhere.
 
  • #75
Well in Chpt. 8 it's just QT in the ##C^*##-algebra formulation. You don't talk about probabilities but about expectation values etc. So just not mentioning the word "probability" doesn't mean that you don't use probability theory.
 
  • #76
vanhees71 said:
define the expectation value of an observable, represented by a self-adjoint operator ##\hat{A}## by
$$\langle A \rangle_{\rho}=\mathrm{Tr}(\hat{\rho} \hat{A}).$$
This is Born's rule.
How can a mathematical definition say anything about the interpretation?

The formula you quote is just part of shut up and calculate. Interpretation enters only if you want to give the expectation value a meaning in terms of measurement.

Standard quantum theory consists of two parts:
  • (S) the shut up and calculate part, which just derives mathematical consequences of definitions, and
  • (I) the interpretation of the formulas from the shut up and calculate part in terms of the real world.
Calling ##\mathrm{Tr}(\hat{\rho} \hat{A})## the expectation of ##A## and denoting it by ##\langle A\rangle## belongs to (S). All rules and results used in statistical mechanics to deduce consequences from it also belong to (S). Only telling what ##\langle A\rangle## should mean belongs to (I). In particular, the shut up and calculate part gets different interpretations depending on how one interprets ##\langle A\rangle##. As equilibrium thermodynamics shows, an interpretation in terms of an average over real measurements is not warranted for macroscopic systems, where usually only a single measurement is made and the averaging becomes vacuous. Instead, the standard interpretation of ##\langle A\rangle## in any textbook of statistical thermodynamics (in particular the famous book by Callen) is to equate it with the measured macroscopic value, since this identification (and only this) allows one to deduce equilibrium thermodynamics from statistical physics.
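To make the (S)/(I) split concrete, here is a minimal numerical sketch (assuming Python with NumPy; the state ##\hat\rho## and observable ##\hat A## are made-up illustrative choices). It exercises only the shut up and calculate part: the number ##\mathrm{Tr}(\hat\rho\hat A)## is computed directly and again as a weighted sum over the eigenvalues of ##\hat A##; nothing in the code decides whether that number is to be read as an ensemble average or as the single measured macroscopic value, which is exactly the (I) question.

```python
import numpy as np

# Made-up mixed qubit state rho (Hermitian, trace 1, positive) and
# Hermitian observable A, purely for illustration.
rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])
A = np.array([[1.0, 0.5],
              [0.5, -1.0]])

# (S): the basis-free number Tr(rho A).
expectation_direct = np.trace(rho @ A).real

# The same number, organized as a sum over the eigenvalues a_i of A
# weighted by the diagonal elements <a_i|rho|a_i>.
eigvals, eigvecs = np.linalg.eigh(A)
weights = np.real(np.diag(eigvecs.conj().T @ rho @ eigvecs))
expectation_weighted = np.sum(eigvals * weights)

print(expectation_direct, expectation_weighted)  # identical numbers
# Whether the weights are "probabilities of outcomes" is an (I) statement;
# the arithmetic above is interpretation-free.
```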

vanhees71 said:
what's measured on macroscopic systems usually are indeed very "coarse-grained observables",
In the derivation of equilibrium thermodynamics from statistical physics, coarse graining is never used.

vanhees71 said:
As far as I know, the definition of POVM measurements relies also on standard quantum theory, and thus on Born's rule (I've read about them in A. Peres, Quantum Theory: Concepts and Methods). It just generalizes "complete measurements" by "incomplete ones". It's not outside the standard rules of quantum theory.

1. The POVM formalism belongs to (S); the interpretation in terms of measurement of physical processes in the Lab belongs to (I). Clearly, Born's rule is only an extremal case of the POVM interpretation.

2. POVM's can be analyzed in terms of Born's rule only by going to a fictitious bigger Hilbert space and defining there a new dynamics. This is not the dynamics that one gets naturally from the given system.

3. Even though it can be derived from Born's rule in the above way, a measurement by means of a POVM is not governed itself by Born's rule. You seem to equate everything that can somehow be derived from Born's rule with Born's rule itself. But this is a severe abuse of language.

For example, homodyne photon measurement measures both the frequency and the phase of a photon, though both are noncommuting variables. This has nothing at all to do with the kind of measurement following the Born rule.
 
  • #77
vanhees71 said:
Well, usual statistical physics textbooks start with equilibrium distributions and define, e.g., the grand canonical operator
$$\hat{\rho}=\frac{1}{Z} \exp[-\beta (\hat{H}-\mu \hat{Q})],$$
to evaluate expectation values using Born's rule, leading to the Fermi- and Bose-distribution functions.

No. These expectation values (of macroscopic quantities) are evaluated with shut up and calculate, not with Born's rule! The results obtained in equilibrium are then identified with the measured thermodynamic values. Nowhere does an interpretation in terms of Born's rule enter.

To apply Born's rule to ##\langle H\rangle##, say, one would have to measure astronomically many spectral lines, then do a numerical analysis to extract the energy levels (doing the calculations with a horrendous number of digits to be able to resolve them reliably), and then perform an average over the astronomically large number of energy levels. This is completely ridiculous.

Instead, only as many measurements are performed as there are thermodynamic degrees of freedom, and these are compared with the formulas obtained by shut up and calculate.

vanhees71 said:
Classical statistical mechanics is also based on probability concepts since Boltzmann & Co. I don't know, which textbooks you have in mind!
Boltzmann worked with an ideal gas, where one can apply statistical reasoning (though not Born's rule) by averaging over independent atoms. But it works only there!

In his famous textbook from 1901 where the grand canonical ensemble was introduced, Gibbs never averages over atoms, but over ensembles of macroscopic systems! He was well aware that his ensembles were fictitious ones, made of imagined copies of the macroscopic system at hand, needed to justify the application of statistical concepts to single cases. At his time, mathematics wasn't yet as abstract as today where one can use any mathematical concept as a tool in quite diverse applications where the same mathematical notion has completely different uses and interpretations as long as the axiomatically defined postulates are satisfied. Thus he had to take recourse to a fictitious average where today just a reference to shut up and calculate suffices.

As physics cannot depend on imagined but unperformed experiments, it is clear that his expectations are not averages over many experiments but refer to the single case at hand.
 
  • #78
Prathyush said:
when a measurement is performed, the state of the system (meaning the information available to us about it) in general changes to reflect the outcome of the measurement.
When a measurement is performed, the state of the detector changes to a state encoding the measurement result. What happens to the tiny system depends a lot on what it is and how it is measured; for example when measuring a photon it is usually absorbed and no longer exists after the measurement.

The paper in the OP analyzes a very special situation where the measurement is a von Neumann measurement, so that the state after the measurement is an eigenstate corresponding to the measured eigenvalue.
 
  • #79
Ddddx said:
The word "collapse" was never used by the founders of quantum theory.
Von Neumann introduced the concept in 1932 under a different name; he called it state vector reduction. The name is not that relevant. What actually happens is.
 
  • #80
vanhees71 said:
Well in Chpt. 8 it's just QT in the ##C^*##-algebra formulation. You don't talk about probabilities but about expectation values etc. So just not mentioning the word "probability" doesn't mean that you don't use probability theory.
By the same reasoning, not mentioning the words coordinates in abstract differential geometry would not mean that you don't use coordinates. The point is that coordinates are not unique, and the meaning of them depends on making choices. Thus not using coordinates is a virtue, and really means that no coordinates are used.

Similarly, there is no probability theory involved in ##C^*##-algebras: nowhere in any definition or result. Probability is not defined until you choose a representation in a separable Hilbert space and an orthonormal basis in it, and it is basis-dependent, whereas the ##C^*##-algebra approach is basis-independent. Which choice is the correct one is one of the problems making up the measurement problem that you so despise. But for statistical mechanics one never needs to make a choice of basis, as the results are all basis-independent. So probability never enters.
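The basis dependence claimed here can be checked directly; the following is a minimal sketch (Python with NumPy; both matrices are made-up illustrative choices). The diagonal "probabilities" of the same ##\hat\rho## differ from basis to basis, while the algebraic expression ##\mathrm{Tr}(\hat\rho\hat A)## never requires choosing a basis at all and is invariant under rotating both operators.

```python
import numpy as np

rho = np.array([[0.7, 0.1],
                [0.1, 0.3]])   # made-up mixed state (Hermitian, trace 1)
A = np.array([[2.0, 1.0],
              [1.0, 0.0]])     # made-up Hermitian observable

# Diagonal "probabilities" in the computational basis ...
probs_z = np.real(np.diag(rho))                      # [0.7, 0.3]

# ... and in the Hadamard basis |+>, |->: a different set of numbers.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
probs_x = np.real(np.diag(H.conj().T @ rho @ H))     # [0.6, 0.4]
print(probs_z, probs_x)

# Tr(rho A) needs no basis choice and is unchanged when both rho and A
# are expressed in any rotated basis.
print(np.trace(rho @ A).real)
print(np.trace((H.conj().T @ rho @ H) @ (H.conj().T @ A @ H)).real)
```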
 
  • #81
A. Neumaier said:
No. These expectation values (of macroscopic quantities) are evaluated with shut up and calculate, not with Born's rule! The results obtained in equilibrium are then identified with the measured thermodynamic values. Nowhere does an interpretation in terms of Born's rule enter.

To apply Born's rule to ##\langle H\rangle##, say, one would have to measure astronomically many spectral lines, then do a numerical analysis to extract the energy levels (doing the calculations with a horrendous number of digits to be able to resolve them reliably), and then perform an average over the astronomically large number of energy levels. This is completely ridiculous.

Instead, only as many measurements are performed as there are thermodynamic degrees of freedom, and these are compared with the formulas obtained by shut up and calculate. Boltzmann worked with an ideal gas, where one can apply statistical reasoning (though not Born's rule) by averaging over independent atoms. But it works only there!

In his famous textbook from 1901 where the grand canonical ensemble was introduced, Gibbs never averages over atoms, but over ensembles of macroscopic systems! He was well aware that his ensembles were fictitious ones, made of imagined copies of the macroscopic system at hand, needed to justify the application of statistical concepts to single cases. At his time, mathematics wasn't yet as abstract as today where one can use any mathematical concept as a tool in quite diverse applications where the same mathematical notion has completely different uses and interpretations as long as the axiomatically defined postulates are satisfied. Thus he had to take recourse to a fictitious average where today just a reference to shut up and calculate suffices.

As physics cannot depend on imagined but unperformed experiments, it is clear that his expectations are not averages over many experiments but refer to the single case at hand.
I think that's just a fight about semantics, i.e., about what you call Born's rule. For me it's the probabilistic interpretation of the state. Usually it's formulated for pure states and then extended to the more general case of mixed states. The upshot is that you can describe the state as a statistical operator ##\hat{\rho}## with the meaning in terms of probabilities given in one of my postings above:
$$P(a)=\sum_{\beta} \langle a,\beta|\hat{\rho}|a,\beta \rangle.$$
That you can identify ##\langle A \rangle## with "the measured value" for macroscopic systems is due to the fact that ##A## is a collective observable like, e.g., the center-of-mass position of some object or of a fluid cell, and that the distributions of such observables tend to be sharply peaked around the average value. Of course, a single measurement doesn't tell you anything, as everybody learns in the introductory lab in any physics curriculum. What you call "measurement" is indeed not formalized in the theory but determined by concrete experimental setups in the lab and real-world measurement devices like detectors.
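The "sharply peaked" statement can be made quantitative with a toy model; here is a rough sketch (Python with NumPy; independent ##\pm 1## spins, chosen only for illustration). The relative spread of the mean magnetization of ##N## spins falls off like ##1/\sqrt{N}##, so for macroscopic ##N## the expectation value and the single measured value agree to many digits.

```python
import numpy as np

rng = np.random.default_rng(0)
p_up = 0.7                      # single-spin probability of +1 (toy value)
mean = 2 * p_up - 1             # single-spin expectation value
std = np.sqrt(1 - mean**2)      # single-spin standard deviation

# Monte-Carlo check for moderate N: the sample mean scatters around `mean`
# with spread std/sqrt(N).
for N in (100, 10_000, 1_000_000):
    spins = rng.choice([1, -1], size=N, p=[p_up, 1 - p_up])
    print(N, spins.mean(), std / np.sqrt(N))

# For a macroscopic sample the relative width is utterly negligible:
N_macro = 1e20
print(std / np.sqrt(N_macro) / abs(mean))   # ~ 2e-10
```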

Of course, you can extend this ideal picture of precisely measured quantities also to microscopic observables with the more general case of incomplete measurements, which is formalized as the POVM formalism, but that is ultimately also based on the fundamental postulates, including Born's rule (at least in the way it's introduced by Peres in his book).
 
  • #82
A. Neumaier said:
By the same reasoning, not mentioning the words coordinates in abstract differential geometry would not mean that you don't use coordinates. The point is that coordinates are not unique, and the meaning of them depends on making choices. Thus not using coordinates is a virtue, and really means that no coordinates are used.

Similarly, there is no probability theory involved in ##C^*##-algebras: nowhere in any definition or result. Probability is not defined until you choose a representation in a separable Hilbert space and an orthonormal basis in it, and it is basis-dependent, whereas the ##C^*##-algebra approach is basis-independent. Which choice is the correct one is one of the problems making up the measurement problem that you so despise. But for statistical mechanics one never needs to make a choice of basis, as the results are all basis-independent. So probability never enters.
Well, we talk about physics, not pure mathematics, and you need a rule (called "interpretation") to relate your formalism to what's measured in the real world. This is done by deciding which observable you measure, and this determines the basis you have to use to calculate the corresponding probabilities. The ##C^*## formalism is, as far as I can see, equivalent to the standard definition of QT, with the advantage of giving a clearer mathematical determination of the operator algebra.
 
  • #83
vanhees71 said:
I think that's just a fight about semantics, i.e., about what you call Born's rule. For me it's the probabilistic interpretation of the state.
Then you should say the latter whenever you want to say the former, or you will certainly invite misunderstanding, for the two are far from synonymous. The standard semantics is the one described by Wikipedia; nobody apart from you uses the far too general meaning you just announced. In particular, equating the two is meaningless in the present context: this thread is about deriving Born's rule from statistical mechanics, not about deriving the probabilistic interpretation of quantum mechanics.

vanhees71 said:
Well, we talk about physics not pure mathematics, and you need a rule (called "interpretation") to relate your formalism to what's measured in the real world. This is done by deciding which observable you measure and this determines the basis you have to use to calculate the corresponding probabilities.
Which observable is measured in homodyne photon detection, the example mentioned before?

Moreover, your recipe (calculate the corresponding probability) only works in simple cases where you have an exactly solvable system, hence can evaluate the partition function as a sum over joint eigenstates. But the latter is just one possible way of organizing the computations (shut up and calculate - no interpretation is needed to express the trace as a sum over eigenvalues) and fails in more complex situations.

In equilibrium thermodynamics one wants to measure the total mass of each chemical component (which may be a complex molecule) and the total energy of a macroscopic interacting system. In these cases one never calculates the thermodynamic equation of state in terms of probabilities. Instead one uses mean field approximations and expansions beyond, as you know very well!

In general, a partition sum is just a piece of shut up and calculate, as it is a mathematically defined expression valid without any interpretation. The interpretation is about relating the final results (the equation of state) to experiments, and this does not involve probabilities at all; it is done simply by equating the expectation of a macroscopic variable with the measured value. Thus this is the true interpretation rule used for macroscopic measurement. Everything else (talk about probabilities, Born's rule, etc.) doesn't enter the game anywhere (unless you want to complicate things unnecessarily, which is against one of the basic scientific principles called Ockham's razor).
 
  • #84
vanhees71 said:
The ##C^*## formalism is, as far as I can see, equivalent to the standard definition of QT, with the advantage of giving a clearer mathematical determination of the operator algebra.
It is equivalent on the shut up and calculate level, but has a definite advantage in clarity, not only on the conceptual but also on the interpretational level. It dispenses with Born's rule, the philosophically problematic concept of probability, and the choice of basis, except when a concrete experiment singles out a concrete basis.

Another advantage is that it directly works with mixed states, which are by far the most common states in Nature, and avoids their decomposition
vanhees71 said:
The upshot is that you can describe the state as a statistical operator ##\hat{\rho}## with the meaning in terms of probabilities given in one of my postings above:
$$P(a)=\sum_{\beta} \langle a,\beta|\hat{\rho}|a,\beta \rangle.$$
which is completely unphysical, since the pieces in the sum are far from unique and therefore cannot have a physical interpretation. Different decompositions are physically indistinguishable!
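The non-uniqueness is a two-line check; a minimal sketch (Python with NumPy; the maximally mixed qubit is just the simplest example): an equal mixture of ##|0\rangle, |1\rangle## and an equal mixture of ##|+\rangle, |-\rangle## yield exactly the same statistical operator, so no measurement whatsoever can distinguish the two decompositions.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2)
ketm = (ket0 - ket1) / np.sqrt(2)

def proj(v):
    """Projector |v><v|."""
    return np.outer(v, v.conj())

rho_zz = 0.5 * proj(ket0) + 0.5 * proj(ket1)   # "ensemble" of |0>, |1>
rho_xx = 0.5 * proj(ketp) + 0.5 * proj(ketm)   # "ensemble" of |+>, |->

print(np.allclose(rho_zz, rho_xx))   # True: one and the same density matrix
```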
 
  • #85
A. Neumaier said:
I refereed the paper in question here.
I added several paragraphs to the review, summarizing what was actually derived, and pointing out the declared interpretative assumptions of the authors of the paper mentioned in the OP. These assumptions were made explicit in a much later paper, namely:

A.E. Allahverdyan, R. Balian and T.M. Nieuwenhuizen,
A sub-ensemble theory of ideal quantum measurement processes,
Annals of Physics 376 (2017): 324-352.
https://arxiv.org/abs/1303.7257

The traditional motivational introduction to statistical mechanics is based on mixing an ensemble of pure states. However, none of the results of statistical mechanics depends on this motivational prelude. Instead of assuming statistical mechanics without spelling out the precise assumptions made, as in the paper mentioned in the OP and in the big 200-page article with all formal details (which might suggest that their derivation depends on the traditional foundations), the authors are here far more explicit about the assumptions necessary to get their results.

They take in this quite recent paper statistical mechanics as a purely formal theory (i.e., in the shut up and calculate mode) and then give new interpretational principles for how this formalism is to be interpreted. In particular, their interpretational principles are independent of Born's rule (as a statement about measurement). As a consequence, the derivation of Born's rule is a result, not a silent assumption. For the present discussion, the most relevant statements from this paper are (emphasis by the authors, but notation for the density operator adapted to the present context):

Allahverdyan Balian and Nieuwenhuizen said:
One should therefore, as done for q-bits, distinguish ##\langle\hat O\rangle=\mathrm{tr}\,\hat\rho\hat O## from an ordinary expectation value by denominating it as a
``q-expectation value''. Likewise, a ``q-correlation'', the q-expectation value of a product of two observables, should not be confused with an
ordinary correlation. Also, the q-expectation value ##\langle\hat\pi\rangle## of a projection operator ##\hat\pi## is not an ordinary probability, but a formal object which we will call ``q-probability'' rather than ``probability''. Born's rule is not postulated here, it will come out (Subsec. 6.4) as a property of the apparatus at the issue of an ideal measurement.

Allahverdyan Balian and Nieuwenhuizen said:
Interpretative principle 1. If the q-variance of a macroscopic observable is negligible in relative size, its q-expectation value is identified with the value of the corresponding macroscopic physical variable, even for an individual system.

These statements exactly match the assumptions made in my thermal interpretation of quantum mechanics.

By the way, they cite Bell's theorem as a main reason why one cannot simply equate the q-expectations with expectation values in the classical sense since some of the properties of expectations valid in the classical case fail to hold in the quantum case.
 
  • #86
A. Neumaier said:
How can a mathematical definition say anything about the interpretation?
This is an interesting question. Most physicists seem to think it does say everything about the interpretation; in other words, the formalism is not as neutral as the "shut up and calculate" motto seems to imply.
From your posts I infer that what you think must inform the interpretation is the macroscopic measurement, but this thread's discussion seems to go in circles, because the formalism doesn't seem to distinguish clearly between uncertainties derived from classical lack of knowledge and those arising from an inherent impossibility in the theory. So the important distinction between statistics and probabilities that has been made here cannot be resolved by the formalism. But just going to the statistical mechanics interpretation seems to lack a new definition of measurement, or at least I can't see the improvement over the basis-dependent probabilities: how is the macroscopic measurement of the specific experiment or observation connected to the formalism in a basis-independent way, without relying on the Born rule?

Also, a starting point here seems to be that Born's rule is just a postulate about probabilities, not acknowledging that the key feature of the rule is that there is a differentiating element with respect to ordinary probabilities, which is likewise passed over by the formalism.
 
  • #87
RockyMarciano said:
the formalism doesn't seem to distinguish clearly between uncertainties derived from classical lack of knowledge and those arising from an inherent impossibility in the theory.
The quantum formalism is independent of knowledge. Subjective issues have no place in physics, except for judging the adequacy of the assumptions and approximations made.
RockyMarciano said:
But just going to the statistical mechanics interpretation seems to lack a new definition of measurement
A measurement of a microscopic system is a reading from a macroscopic device that contains information about the state of the microscopic system. The nature of the coupling and the dynamical analysis must tell which information is encoded in the measurement result, to which accuracy, and with which probabilities.

This definition of a measurement is operationally checkable since one can prepare the states and read the measurement results and can thus compare the theory with the calculations without any ambiguity of concepts.

The only interpretation needed is how the reading from the macroscopic device is related to its macroscopic properties. In the thermal interpretation, this poses no problem at all. The consequences for the microscopic theory are then a matter of deduction, not one of postulation.

Whereas Born's rule is very incomplete in that it doesn't say the slightest thing about what constitutes a measurement, so it is an uncheckable piece of philosophy not of science, unless you know already what measurement means. But this requires knowing a lot of quantum physics that goes into building high quality measurement devices for quantum objects. Thus foundations based on Born's rule are highly circular - unlike foundations based on a properly understood statistical mechanics approach.
 
  • #88
A. Neumaier said:
Then you should say the latter whenever you want to say the former, or you will certainly invite misunderstanding, for the two are far from synonymous. The standard semantics is the one described by Wikipedia; nobody apart from you uses the far too general meaning you just announced. In particular, equating the two is meaningless in the present context: this thread is about deriving Born's rule from statistical mechanics, not about deriving the probabilistic interpretation of quantum mechanics. Which observable is measured in homodyne photon detection, the example mentioned before?

Moreover, your recipe (calculate the corresponding probability) only works in simple cases where you have an exactly solvable system, hence can evaluate the partition function as a sum over joint eigenstates. But the latter is just one possible way of organizing the computations (shut up and calculate - no interpretation is needed to express the trace as a sum over eigenvalues) and fails in more complex situations.

In equilibrium thermodynamics one wants to measure the total mass of each chemical component (which may be a complex molecule) and the total energy of a macroscopic interacting system. In these cases one never calculates the thermodynamic equation of state in terms of probabilities. Instead one uses mean field approximations and expansions beyond, as you know very well!

In general, a partition sum is just a piece of shut up and calculate, as it is a mathematically defined expression valid without any interpretation. The interpretation is about relating the final results (the equation of state) to experiments, and this does not involve probabilities at all; it is done simply by equating the expectation of a macroscopic variable with the measured value. Thus this is the true interpretation rule used for macroscopic measurement. Everything else (talk about probabilities, Born's rule, etc.) doesn't enter the game anywhere (unless you want to complicate things unnecessarily, which is against one of the basic scientific principles called Ockham's razor).
In the first few lines of the Wikipedia article they give precisely the definition I gave some postings above. I'm using the standard terminology, while you prefer to deviate from it, so that we have to clarify semantics instead of discussing physics.

In homodyne detection what's measured are intensities, as in any quantum-optical measurement. I refer to Scully & Zubairy, Quantum Optics. One application is to characterize an input signal (e.m. radiation, annihilation operator ##\hat{a}##) using a reference signal (the "local oscillator", annihilation operator ##\hat{b}##). They are sent through a beam splitter with transmittivity ##T## and reflectivity ##R##, ##T+R=1##. The operators of the two output channels are then defined by (I don't put hats on top of the operators from now on):
$$c=\sqrt{T} a + \mathrm{i} \sqrt{1-T} b, \quad d=\mathrm{i}\sqrt{1-T}\, a + \sqrt{T} b.$$
What's measured is the intensity at channel ##c##, i.e., ##c^{\dagger} c##.

If the local oscillator is in a coherent state ##|\beta_l \rangle## you get for the expectation value
$$\langle c^{\dagger} c \rangle=T \langle a^{\dagger} a \rangle + (1-T)|\beta_l|^2 + 2 \sqrt{T(1-T)}\, |\beta_l| \langle X(\phi_l+\pi/2) \rangle$$
with
$$X(\phi)=\frac{1}{2} \left(a \exp(-\mathrm{i} \phi)+a^{\dagger} \exp(\mathrm{i} \phi)\right).$$
All this is done within standard QT using Born's rule in the above-given sense. I don't see which point you want to make with this example. It's all standard Q(F)T.
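For what it's worth, the beam-splitter relation and the definition of ##X(\phi)## above can be checked numerically. The following sketch (Python with NumPy, truncated Fock spaces; both inputs are taken to be coherent states purely for convenience, and all parameter values are made up) computes ##\langle c^\dagger c\rangle## directly and compares it with the transmitted and reflected intensities plus the interference term ##2\sqrt{T(1-T)}\,|\beta_l|\langle X(\phi_l+\pi/2)\rangle##, which is what makes the detected intensity sensitive to a quadrature of the signal.

```python
import numpy as np
from math import factorial

D = 30                                       # Fock-space truncation (assumed large enough)
a = np.diag(np.sqrt(np.arange(1, D)), k=1)   # single-mode annihilation operator
I = np.eye(D)

def coherent(alpha):
    """Truncated coherent state |alpha>."""
    n = np.arange(D)
    return np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt([factorial(k) for k in n])

T = 0.3                           # transmittivity (toy value)
alpha = 1.2 * np.exp(1j * 0.4)    # signal mode (toy value)
beta = 2.0 * np.exp(1j * 1.1)     # local oscillator (toy value)
phi_l = np.angle(beta)

# Output operator c = sqrt(T) a + i sqrt(1-T) b on the two-mode space a (x) b.
c = np.sqrt(T) * np.kron(a, I) + 1j * np.sqrt(1 - T) * np.kron(I, a)
state = np.kron(coherent(alpha), coherent(beta))

direct = np.real(state.conj() @ (c.conj().T @ c) @ state)

# Closed form: transmitted signal + reflected LO + quadrature interference.
X = np.real(alpha * np.exp(-1j * (phi_l + np.pi / 2)))   # <X(phi_l + pi/2)> for |alpha>
closed = (T * abs(alpha)**2 + (1 - T) * abs(beta)**2
          + 2 * np.sqrt(T * (1 - T)) * abs(beta) * X)

print(direct, closed)   # agree up to the truncation error
```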

Now you switch to partition sums, i.e., thermodynamical systems. Take as an example black-body radiation (or any other ideal gas of quanta), i.e., a radiation field in thermal equilibrium with the walls of a cavity at temperature ##T=1/\beta##.
The statistical operator is
$$\hat{\rho}=\frac{1}{Z} \exp(-\beta \hat{H}).$$
The partition sum here is
$$Z=\mathrm{Tr} \exp(-\beta \hat{H}).$$
The Hamiltonian is given by (I use a (large) quantization volume ##V## with periodic boundary conditions for simplicity)
$$\hat{H}=\sum_{\vec{n} \in \mathbb{Z}^3, h \in \{\pm 1\}} \omega(\vec{p}) \hat{a}^{\dagger}(\vec{p},h) \hat{a}(\vec{p},h), \quad \vec{p} = \frac{2 \pi}{L} \vec{n}.$$
For the following it's convenient to evaluate the somewhat generalized partition function
$$Z=\mathrm{Tr} \exp\left(-\sum_{\vec{n},h} \beta(\vec{n},h)\, \omega_{\vec{n}}\, \hat{N}(\vec{n},h)\right).$$
Using the Fock states leads to
$$Z=\prod_{\vec{n},h} \frac{1}{1-\exp(-\omega_{\vec{n}} \beta(\vec{n},h))}.$$
The thermodynamic limit is given by making the volume ##V=L^3## large:
$$\ln Z=-\sum_{\vec{n},h} \ln [1-\exp(-\omega_{\vec{n}} \beta(\vec{n},h))]=-V \sum_{h} \int_{\mathbb{R}^3} \frac{\mathrm{d}^3 \vec{p}}{(2\pi)^3} \ln [1-\exp(-\beta(\vec{p},h) |\vec{p}|)].$$
The spectrum (i.e., the mean number of photons per three-momentum) is calculated by
$$\langle N(\vec{p},h) \rangle=-\frac{1}{\omega_{\vec{p}}} \frac{\delta}{\delta \beta(\vec{p},h)} \ln Z=\frac{V}{\exp(\beta |\vec{p}|)-1}.$$
It's measured with the help of a spectrometer (or with the Planck satellite for the cosmic microwave background).
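As a cross-check of the occupation-number result, here is a small per-mode sketch (Python with NumPy; a single oscillator mode with made-up values of ##\beta## and ##\omega##; the volume and functional-derivative factors above are left aside). The geometric sums are evaluated directly and compared with the closed-form Bose factor ##1/(e^{\beta\omega}-1)##.

```python
import numpy as np

beta, omega = 0.7, 1.3        # made-up inverse temperature and mode frequency
n = np.arange(0, 400)         # occupation numbers; 400 terms is plenty here

weights = np.exp(-beta * omega * n)      # canonical weights of the Fock states
Z_mode = weights.sum()                   # single-mode partition sum
N_mean = (n * weights).sum() / Z_mode    # mean occupation of the mode

print(Z_mode, 1 / (1 - np.exp(-beta * omega)))   # geometric series, same number
print(N_mean, 1 / (np.exp(beta * omega) - 1))    # the Bose distribution factor
```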

It's all standard QT and uses, of course, Born's rule.
 
  • #89
A. Neumaier said:
Whereas Born's rule is very incomplete in that it doesn't say the slightest thing about what constitutes a measurement, so it is an uncheckable piece of philosophy not of science, unless you know already what measurement means. But this requires knowing a lot of quantum physics that goes into building high quality measurement devices for quantum objects. Thus foundations based on Born's rule are highly circular - unlike foundations based on a properly understood statistical mechanics approach.
No theory (not even classical mechanics or field theory) says "the slightest thing about what constitutes a measurement". Physical observables are defined by concrete measurement devices in the lab, not by a theoretical formalism. The theoretical formalism rather gives a mathematical description of such observations. As the name already tells, a statistical mechanics (or rather statistical physics) approach also uses probabilities in its foundations, for what else is statistics but applied probability theory?

Only with Born's rule does the quantum formalism become interpretable without contradicting experience. It's not enough to give the other postulates (concerning the formal math describing quantum kinematics and dynamics).
 
  • #90
vanhees71 said:
In the Wikipedia article in the first few lines they give precisely the definition, I gave some postings above. I'm using the standard terminology, while you prefer to deviate from it so that we have to clarify semantics instead of discussing physics.
Well, when the concepts are not clear one must first clarify the semantics before one can communicate physics.

You give conflicting definitions of what you mean by the Born rule, but not all can be true. For example, you said in post #27 that the definition ##\langle A\rangle =\mathrm{tr}\,\rho A## is Born's rule. Where in the first few lines of the Wikipedia article is this stated?
 
  • #91
vanhees71 said:
Physical observables are defined by concrete measurement devices in the lab, not by a theoretical formalism.
These concrete devices are calibrated by using quantum mechanical theory for checking that they actually do what they do. Without having already quantum mechanics working one cannot validate any of these checks. One doesn't know the state a laser produces without knowing the theory of the laser, etc. Thus one cannot check the definition of a physical observable (such as spin up) that goes into the theory with which something is computed without having already the theory. This is standard circularity.
 
  • #92
vanhees71 said:
Now you switch to partition sums, i.e., thermodynamical systems.
and you choose an exactly solvable system, which, as I said, is a very special case: the only kind of case where one can use the sum over probabilities to calculate the partition function. Yes, in particular cases Born's rule applies and probabilities are used to do the calculations. But these are very special cases.

And even in your partition sum there is not a single measurement but only computations, hence Born's rule (which, according to Wikipedia, is ''a law of quantum mechanics giving the probability that a measurement on a quantum system will yield a given result'') doesn't apply. You pay lip service to Born's rule but you don't use it in your computations.
 
  • #93
vanhees71 said:
In homodyne detection what's measured are intensities
What I meant was using homodyne detection to measure simultaneously the quadratures (which are noncommuting optical analogues of position and momentum) by splitting the photon beam 50:50 and then making homodyne measurements on each beam. Of course the raw measurements are measurements of intensities, but in terms of the input, what is measured (inaccurately, within the validity of the uncertainty relation) are noncommuting quadratures.
 
  • #94
vanhees71 said:
Physical observables are defined by concrete measurement devices in the lab, not by a theoretical formalism.
Take as a very concrete example the Hamiltonian, which is the observable that goes into the computation of the partition function of a canonical ensemble. How do you define this observable by a concrete measurement device in the lab that would give, according to Born's rule, the ##k##th energy level ##E_k## as measurement result with probability ##e^{-\beta E_k}/Z##?

The impossibility of giving such a device proves that defining the meaning of observables and of (accurate) measurements is a thoroughly theoretical process, not just one of experimentation!
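To make the numbers in the question explicit, here is a minimal sketch (Python with NumPy; a made-up four-level toy spectrum and temperature). It lists the canonical weights ##e^{-\beta E_k}/Z## that a Born-rule reading would have to sample level by level, and the single number ##\langle H\rangle## that equilibrium thermodynamics actually compares with the measured internal energy.

```python
import numpy as np

beta = 2.0                               # made-up inverse temperature
E = np.array([0.0, 0.5, 1.3, 2.1])       # made-up energy levels of a toy system

weights = np.exp(-beta * E)
Z = weights.sum()
p = weights / Z                          # canonical probabilities e^{-beta E_k}/Z

H_mean = (p * E).sum()                   # the single number compared with experiment

print(p)        # level-by-level statistics a Born-rule reading would require
print(H_mean)   # what is identified with the measured internal energy
```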
 
  • #95
A. Neumaier said:
These concrete devices are calibrated by using quantum mechanical theory for checking that they actually do what they do. Without having already quantum mechanics working one cannot validate any of these checks. One doesn't know the state a laser produces without knowing the theory of the laser, etc. Thus one cannot check the definition of a physical observable (such as spin up) that goes into the theory with which something is computed without having already the theory. This is standard circularity.
Sure, it's well known that physics is "circular" in this way. You need theory to construct measurement devices. At the same time these devices are used to check the very theory on which their construction is based. In a sense, testing the theories is just a test of the consistency of the theory with the observations made.

Spin is a good example. The Stern-Gerlach experiment was undertaken before quantum theory in its modern form and before the modern notion of "spin" had been discovered. The theory used was classical mechanics with some ideas from the Bohr-Sommerfeld model, and what was tested were hypotheses based on it. The main trouble in this context was the "anomalous Zeeman effect", which could not be well explained by the Bohr-Sommerfeld model. For a very amusing account of the history (including the fact that without bad cigars the SG experiment most likely would have failed ;-)), see

https://faculty.chemistry.harvard.edu/files/dudley-herschbach/files/how_a_bad_cigar_0.pdf
 
  • #96
A. Neumaier said:
What I meant was using homodyne detection to measure simultaneously the quadratures (which are noncommuting optical analogues of position and momentum) by splitting the photon beam 50:50 and then making homodyne measurements on each beam. Of course the raw measurements are measurements of intensities, but in terms of the input, what is measured (inaccurately, within the validity of the uncertainty relation) are noncommuting quadratures.
I'm not familiar with all applications of homodyne measurements. Before I can comment on this, please describe precisely what experiment you have in mind. What's measured in quantum optics are, technically speaking, usually correlation functions of field operators. Such correlation functions sometimes refer to "incomplete measurements" of incompatible observables. How does this, in your opinion, invalidate the standard postulates of Q(F)T? I'm not aware that quantum optics is based on a theory other than standard QT.
 
  • #97
A. Neumaier said:
Take as a very concrete example the Hamiltonian, which is the observable that goes into the computation of the partition function of a canonical ensemble. How do you define this observable by a concrete measurement device in the lab that would give, according to Born's rule, the ##k##th energy level ##E_k## as measurement result with probability ##e^{-\beta E_k}/Z##?

The impossibility of giving such a device proves that defining the meaning of observables and of (accurate) measurements is a thoroughly theoretical process, not just one of experimentation!
Hm, that's not so easy. In principle you can measure it by looking at the emission spectrum of the gas (of course the temperature should be large enough so that states above the ground state are populated). The relative strengths of the different lines are governed by the Boltzmann distribution.
 
  • #98
vanhees71 said:
it's well known that physics is "circular" in this way.
But theoretical physics does not need to be circular; one can have a good theory with a noncircular interpretation in terms of experiments.

While one is still learning about the phenomena in a new theory, circularity is unavoidable. But once things are known for some time (and quantum physics is known in this sense for a very long time) the theory becomes the foundation and physical equipment and experiments are tested for quality by how well they match the theory. Even the definitions of units have been adapted repeatedly to better match theory!

vanhees71 said:
Hm, that's not so easy. In principle you can measure it by looking at the emission spectrum of the gas
But this gives you energy differences, not energy levels. This does not even closely resemble Born's rule!
Moreover, it is a highly nontrivial problem in spectroscopy to deduce the energy levels from a collection of measured spectral lines! And it cannot be done for large molecules over an extended energy range, let alone for a brick of iron.

vanhees71 said:
The relative strengths of the different lines are governed by the Boltzmann distribution.
No. It depends also on selection rules and how much they are violated in a particular case. It is quite complicated.

vanhees71 said:
I'm not familiar with all applications of homodyne measurements. Before I can comment on this, please give a definition of what experiment you precisely have in mind.
I mentioned everything necessary. To approximately measure the two quadratures of photons in a beam one passes them through a symmetric beam splitter and then measures the resulting superposition of photons in the two beams by a homodyne detection on each beam. Details are, for example, in a nice little book by Ulf Leonhardt, Measuring the Quantum State of Light. This is used in quantum tomography; the link contains context and shows how the homodyne detection enters.
 
  • #99
vanhees71 said:
Well, in physics you don't want a self-contradictory description. The collapse hypothesis is incompatible with the very foundations of physics, i.e., the causality structure of relativistic spacetime. So why should you assume such a thing? I don't know since I don't know a single example of a real-world experiment, where this assumption is really needed.

I won't use the word collapse from now on; it has meanings that I don't intend, and it is also very bad terminology. Let's use the following language from now on. We prepare a system in a state described as ##|\psi_{in}>##. The system was measured to be in a state described as ##|\psi_{out}>## with probability ##|<\psi_{in}|\psi_{out}>|^2##. When we use an apparatus where we destroy the particle, the appropriate clarification must be made. The wave function is our description of the system. What ##|\psi_{in}>## and ##|\psi_{out}>## are depends on the details of the experimental apparatus.

This must be noncontroversial to both of us. (Right?)
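As a concrete instance of the rule just stated, a minimal sketch (Python with NumPy; the spin-1/2 case is the usual textbook example, chosen only for illustration): a system prepared spin-up along ##z## and detected along an axis tilted by ##\theta## gives the detection probability ##\cos^2(\theta/2)##.

```python
import numpy as np

def spin_up(theta, phi=0.0):
    """Spin-1/2 state pointing along the (theta, phi) direction."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

psi_in = spin_up(0.0)                          # prepared: up along z
for theta in (0.0, np.pi / 2, np.pi):
    psi_out = spin_up(theta)                   # detected: up along the tilted axis
    prob = abs(np.vdot(psi_in, psi_out))**2    # |<psi_in|psi_out>|^2
    print(theta, prob, np.cos(theta / 2)**2)   # 1, 0.5, ~0 respectively
```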
 
  • #100
Prathyush said:
I won't use the word collapse from now on; it has meanings that I don't intend, and it is also very bad terminology. Let's use the following language from now on. We prepare a system in a state described as ##|\psi_{in}>##. The system was measured to be in a state described as ##|\psi_{out}>## with probability ##|<\psi_{in}|\psi_{out}>|^2##. When we use an apparatus where we destroy the particle, the appropriate clarification must be made. The wave function is our description of the system. What ##|\psi_{in}>## and ##|\psi_{out}>## are depends on the details of the experimental apparatus.
That's not how I would describe things. First off, I would not use the term "measured"; I would rather refer to "state preparation" and "state detection". In the case of detection, it is an eigenvalue ##a## of the observable ##A## chosen by the apparatus that is detected. But we must also take into account the role of the detection apparatus, since the detection process is one of interaction.

The "scattering amplitude" for the interaction is then ##<\psi_i,\phi_i|\psi_f,\phi_f>## where ##\psi_i, \psi_f## are the initial (prepared) and final states of the system that is detected and ##\phi_i, \phi_f## are the initial and final states of the detection apparatus. The detected value ##a## is then interpreted from the change to the apparatus as a function of ##\phi_i## and ##\phi_f## with probability given by the square modulus of the scattering amplitude. In the case that the change in the apparatus is sufficiently small (##\phi_f\approx \phi_i##) and ##\psi_f = \psi_a## is the eigenstate of ##A## with eigenvalue ##a## and then we would have that ##|<\psi_f |\psi_a>|^2## is an approximation to the probability of finding the state ##a##.
 
  • #101
A. Neumaier said:
The quantum formalism is independent of knowledge. Subjective issues have no place in physics, except for judging the adequacy of the assumptions and approximations made.

A measurement of a microscopic system is a reading from a macroscopic device that contains information about the state of the microscopic system. The nature of the coupling and the dynamical analysis must tell which information is encoded in the measurement result, to which accuracy, and with which probabilities.

This definition of a measurement is operationally checkable since one can prepare the states and read the measurement results and can thus compare the theory with the calculations without any ambiguity of concepts.

The only interpretation needed is how the reading from the macroscopic device is related to its macroscopic properties. In the thermal interpretation, this poses no problem at all. The consequences for the microscopic theory are then a matter of deduction, not one of postulation.

Whereas Born's rule is very incomplete in that it doesn't say the slightest thing about what constitutes a measurement, so it is an uncheckable piece of philosophy not of science, unless you know already what measurement means. But this requires knowing a lot of quantum physics that goes into building high quality measurement devices for quantum objects. Thus foundations based on Born's rule are highly circular - unlike foundations based on a properly understood statistical mechanics approach.
AFAICS what you call "a properly understood statistical mechanics approach" doesn't seem to say much more about what constitutes a measurement (at least nothing different from the classical measurements with commuting observables that classical statistical mechanics addresses) than Born's postulate does. Furthermore, you blur any additional hint by declaring the ambiguity between classical and quantum uncertainty, exploited for a statistical mechanics interpretation, to be something subjective and outside the formalism, so I honestly can't see how this approach improves on Born's rule for elucidating the nature of quantum uncertainty and measurements.
 
  • #102
RockyMarciano said:
AFAICS what you call "a properly understood statistical mechanics approach" doesn't seem to say much more about what constitutes a measurement (at least nothing different from the classical measurements with commuting observables that classical statistical mechanics addresses) than Born's postulate does. Furthermore, you blur any additional hint by declaring the ambiguity between classical and quantum uncertainty, exploited for a statistical mechanics interpretation, to be something subjective and outside the formalism, so I honestly can't see how this approach improves on Born's rule for elucidating the nature of quantum uncertainty and measurements.
A measurement of a system is a reading of a macroscopic variable from a macroscopic device that (due to the unitary quantum dynamics of the universe) provides information about the state of the system. This is a very clear and natural definition of measurement, valid both in the classical and the quantum regime. If the quantum dynamics is sufficiently well analyzed, one can infer from a precise protocol on how the reading is done (which might even involve some computation) and a theoretical model of system and device what is observed and how accurate it is.

For a macroscopic variable, the measured value is (to several decimal digits of relative accuracy) the expectation value of the corresponding observable (in quantum mechanics, Hermitian operator). This is the "properly understood statistical mechanics approach", and is one of the interpretative principles stated by the authors of the papers under discussion. Actually, together with the above definition of a measurement, this is the only piece of interpretation needed and defines everything. (A slightly more precise version of this statement is the content of my thermal interpretation of quantum mechanics.)

Given the above, everything can be analyzed in principle, without any ambiguity or circularity. Indeed, this is the very reason why ''shut up and calculate'' works!


Careless reading of a measurement value that could give rise to subjective uncertainty is not part of physics, but figures under lack of ability to qualify as an observer.

In the above scheme, nothing at all needs to be assumed about any commuting properties, any eigenvalues, or any probabilities; Born's rule doesn't enter the picture. All this doesn't matter, except to get closed-form results in some exactly solvable toy problems.

In contrast, if you start with Born's rule it doesn't give you the slightest idea of what a measurement is, how the measurement result would appear in a pointer position to be read, say, or what is the objective and what the subjective part in making a measurement. Everything is left completely vague.
 
  • #103
Prathyush said:
it is also possible that you have not made a sufficiently clear and compelling argument. I too find what you are saying to be in need of clarification.
Is the clarification given in posts #85, #87, and #102 sufficient for you? Or what else needs to be clarified?
 
  • #104
A. Neumaier said:
A measurement of a system is a reading of a macroscopic variable from a macroscopic device that (due to the unitary quantum dynamics of the universe) provides information about the state of the system. This is a very clear and natural definition of measurement, valid both in the classical and the quantum regime. If the quantum dynamics is sufficiently well analyzed, one can infer from a precise protocol on how the reading is done (which might even involve some computation) and a theoretical model of system and device what is observed and how accurate it is.

For a macroscopic variable, the measured value is (to several decimal digits of relative accuracy) the expectation value of the corresponding observable (in quantum mechanics, Hermitian operator). This is the "properly understood statistical mechanics approach", and is one of the interpretative principles stated by the authors of the papers under discussion. Actually, together with the above definition of a measurement, this is the only piece of interpretation needed and defines everything. (A slightly more precise version of this statement is the content of my thermal interpretation of quantum mechanics.)

Given the above, everything can be analyzed in principle, without any ambiguity or circularity. Indeed, this is the very reason why ''shut up and calculate'' works!


Careless reading of a measurement value that could give rise to subjective uncertainty is not part of physics, but figures under lack of ability to qualify as an observer.

In the above scheme, nothing at all needs to be assumed about any commuting properties, any eigenvalues, or any probabilities; Born's rule doesn't enter the picture. All this doesn't matter, except to get closed-form results in some exactly solvable toy problems.

In contrast, if you start with Born's rule it doesn't give you the slightest idea of what a measurement is, how the measurement result would appear in a pointer position to be read, say, or what the objective and subjective part in making a measurement is. Everything is left completely vague.
If the same measurement definition as in the classical case is valid, then an explanation should be included of why the predictions of the measurements are no longer deterministic in principle, and also of why the probabilities are computed differently from those of classical measurements. How is this addressed?
 
  • #105
RockyMarciano said:
If the same measurement definition as in the classical case is valid, then an explanation should be included of why the predictions of the measurements are no longer deterministic in principle, and also of why the probabilities are computed differently from those of classical measurements. How is this addressed?
The reason is simply that the same definition does not imply the same results if the dynamical rules to which the definition applies are different.

Moreover, even classically, measurements are often not predictable over a significant time scale. Classical Brownian motion (a dust particle in a fluid) is intrinsically undetermined, since the initial state cannot be known to the accuracy required.
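The practical point, that deterministic dynamics with imprecisely known initial data becomes unpredictable, can be illustrated with any chaotic map. The sketch below (Python; the logistic map, which is of course not a model of Brownian motion, just the simplest stand-in) shows two initial conditions differing by ##10^{-12}## drifting apart to order one within a few dozen steps.

```python
# Logistic map x -> r x (1 - x): deterministic, but with sensitive dependence
# on initial conditions (illustrative stand-in, not a Brownian-motion model).
r = 3.9
x, y = 0.2, 0.2 + 1e-12     # two initial conditions, indistinguishable in practice

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))   # the separation grows roughly exponentially
```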
 
