Evaluate this paper on the derivation of the Born rule

The discussion revolves around the evaluation of the paper "Curie Weiss model of the quantum measurement process" and its implications for understanding the Born rule in quantum mechanics. Participants express interest in the authors' careful analysis of measurement processes, though some raise concerns about potential circular reasoning in deriving the Born rule from the ensemble interpretation of quantum mechanics. The conversation highlights the relationship between state vectors, probability, and the scalar product in Hilbert space, emphasizing the need for a clear understanding of measurement interactions. There is also skepticism regarding the applicability of the model to real experimental setups, with calls for more precise definitions and clarifications of the concepts involved. Overall, the discourse reflects a deep engagement with the complexities of quantum measurement theory.
  • #61
Prathyush said:
If you can construct an experiment that can interfere between dead and alive states of a cat you will realize what stevendaryl is saying is correct. However in practice it is impossible to do so.
I disagree. I would claim that the "probability amplitudes" have exactly the same meaning (an abstract probability that relates to the relative frequencies over an infinite number of identical experiments) whether you make 1, 10, 100 or 0 measurements, with electrons or cats.

In the unfortunate language of "collapse" I am saying that it is just as accurate (or no less inaccurate!) to say it takes place at production as at detection.
 
Last edited:
  • #62
mikeyork said:
I disagree. I would claim that the "probability amplitudes" have exactly the same meaning (an abstract probability that relates to the relative frequencies over an infinite number of identical experiments) whether you make 1, 10, 100 or 0 measurements, with electrons or cats.

In the unfortunate language of "collapse" I am saying that it is just as accurate (or no less inaccurate!) to say it takes place at production as at detection.
I don't understand at all what you are saying. Collapse is not a physical process; the wavefunction is our description of the system, that is all.
 
  • #63
Prathyush said:
I don't understand at all what you are saying. Collapse is not a physical process; the wavefunction is our description of the system, that is all.
I am saying that probability amplitudes have the same meaning whether any measurements are made or not. To say that spin has certain probabilities of being up or down is not the same as saying it is neither.
 
  • #64
mikeyork said:
I am saying that probability amplitudes have the same meaning whether any measurements are made or not. To say that spin has certain probabilities of being up or down is not the same as saying it is neither.
Probability amplitudes when squared talk about the probabilities of measurements. That is the only way we can use them. You may disagree, but if you want to discuss this point please start a separate thread.
 
  • #65
Prathyush said:
Probability amplitudes when squared talk about the probabilities of measurements. That is the only way we can use them. You may disagree, but if you want to discuss this point please start a separate thread.
No, I don't disagree at all. You just don't have to make a measurement for them to have that meaning.
 
  • #66
The word "collapse" was never used by the founders of quantum theory.

If you look at Feynman's lectures on physics volume 3, you will find exactly zero mentions of that word.

It just isn't proper terminology, and seems to stem from a misunderstanding of what the wave function is.
 
  • #67
Ddddx said:
The word "collapse" was never used by the founders of quantum theory.

If you look at Feynman's lectures on physics volume 3, you will find exactly zero mentions of that word.

It just isn't proper terminology, and seems to stem from a misunderstanding of what the wave function is.

The word collapse should not be used. It should simply be called measurement.
 
  • #68
Prathyush said:
I have encountered this paper "Curie-Weiss model of the quantum measurement process". https://arxiv.org/abs/cond-mat/0203460

Another work by the same authors is "Understanding quantum measurement from the solution of dynamical models" https://arxiv.org/abs/1107.2138

I am still evaluating the papers. I find the general lessons implied to be interesting and probably compelling.

In the context of the model studied, is this paper accurate? What do you think about the overarching viewpoint presented by the authors?
I mentioned this work several times at physicsforums (see, e.g., https://www.physicsforums.com/threa...-local-realism-ruled-out.689717/#post-4372139 )

I believe this is outstanding work, although I cannot check their calculations. I would emphasize the following: 1. They show that the Born rule can be derived from unitary evolution as an approximate, rather than an exact, result; 2. The contradiction between unitary evolution and definite outcomes of measurements can be overcome to some extent: the reversal of definite outcomes takes a very long time (of the order of the Poincaré recurrence time).
 
  • #69
vanhees71 said:
You can load QT (as any mathematical model of reality) with some philosophical (not to call it esoterical) questions like, why we always measure eigenvalues of self-adjoint operators, but physics doesn't answer why a mathematical model works, it just tries to find through an interplay between measurements and mathematical reasoning such models that describe nature (or even more carefully formulated what we observe/measure in nature) as good as possible.

The purpose of my investigation is to understand the mechanics of measurement: why measurement apparatuses do what they appear to do. Consider a cloud chamber: we understand exactly how it is constructed. Take water molecules, treat them in such-and-such a way, and we can build it. We know that upon interaction with a charged particle it turns cloudy, and we thereby obtain information about the particle's position. Now I want to understand exactly why this happens. Clearly the situation requires describing the cloud chamber using statistical ensembles. The location of the cloud is related to the location of the charged particle. However, water molecules are difficult to describe. Can one distil the essence of such a problem into a model? From such an investigation it seems highly compelling to me that Born's rule can be understood from dynamics.
 
  • #70
stevendaryl said:
I don't think that's true. I should say more definitely: it is not true. Superpositions are not a matter of descriptive choices. To say that an electron is in a superposition ##\alpha |u\rangle + \beta |d\rangle## implies that a measurement of the spin along axis ##\vec{a}## will yield spin-up with a probability given by (mumble..mumble---I could work it out, but I don't feel like it right now). So there is a definite state ##\alpha |u\rangle + \beta |d\rangle##, and it has a definite meaning. It's not just a matter of descriptions.
Your axis ##\vec{a}## is a descriptive choice. The probabilities you get are dependent on that choice. Choose a different axis and you'll get different probabilities.
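This axis dependence is easy to check numerically. A minimal sketch (not from the thread; it assumes numpy, and the helper name `prob_up_along` is mine): the same state ##\alpha |u\rangle + \beta |d\rangle## yields different outcome probabilities for different choices of ##\vec{a}##.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def prob_up_along(axis, alpha, beta):
    """Probability of measuring spin-up along `axis` for alpha|u> + beta|d>."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    S = a[0] * sx + a[1] * sy + a[2] * sz      # spin projection (units of hbar/2)
    vals, vecs = np.linalg.eigh(S)             # eigenvalues ascending: -1, +1
    up = vecs[:, 1]                            # eigenvector for the +1 outcome
    psi = np.array([alpha, beta], dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return abs(np.vdot(up, psi)) ** 2          # Born probability |<up|psi>|^2

# For |u> itself: certain along z, but only 50/50 along x.
print(prob_up_along([0, 0, 1], 1, 0))   # 1.0
print(prob_up_along([1, 0, 0], 1, 0))   # 0.5
```

The state is fixed; only the chosen axis (basis) changes, and with it the probabilities.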
 
  • #71
stevendaryl said:
I'm saying (actually, I did say) that an electron that is in a superposition of spin-up and spin-down is neither spin-up nor spin-down until we measure it. What this implies about dead cats is complicated.
According to standard QT the spin-##z## component is in this case indeterminate, and it is not indeterminate because we don't know it; it is really indeterminate. All that is known about ##\sigma_z## are the probabilities to find the two possible values when measuring the observable ##\sigma_z##. That's it. There's no more according to QT.
 
  • #72
Prathyush said:
I can construct a detailed experiment, but that would require time. Would you agree with the following statement: when a measurement is performed, the state of the system (meaning the information available to us about it) in general changes to reflect the outcome of the measurement.
In the Schrödinger picture the Statistical Operator changes according to the full Hamiltonian of the system. The Hamiltonian must include the interaction with the complete setup. That's all there is, and it's as well valid in classical physics. The only difference is that in classical physics, where you measure macroscopic observables you can make the influence of the measurement apparatus arbitrarily small. This is not the case when you measure microscopic observables. E.g., to measure the electric field of an electron there's no test charge to do so because any charge is at least as big as the electron charge.
 
  • #73
Prathyush said:
I don't understand at all what you are saying. Collapse is not a physical process; the wavefunction is our description of the system, that is all.
Well, in physics you don't want a self-contradictory description. The collapse hypothesis is incompatible with the very foundations of physics, i.e., the causality structure of relativistic spacetime. So why should you assume such a thing? I see no reason to, since I don't know of a single example of a real-world experiment where this assumption is really needed.
 
  • #74
Prathyush said:
This is clearly a question that I haven't thought about in depth, this discussion was extremely fruitful to me because it brought these issues into the forefront.
Chapters 7-10 in my online book derive everything without assuming Born's rule anywhere.
 
  • #75
Well in Chpt. 8 it's just QT in the ##C^*##-algebra formulation. You don't talk about probabilities but about expectation values etc. So just not mentioning the word "probability" doesn't mean that you don't use probability theory.
 
  • #76
vanhees71 said:
define the expectation value of an observable, represented by a self-adjoint operator ##\hat{A}## by
$$\langle A \rangle_{\rho}=\mathrm{Tr}(\hat{\rho} \hat{A}).$$
This is Born's rule.
How can a mathematical definition say anything about the interpretation?

The formula you quote is just part of shut up and calculate. Interpretation enters only if you want to give the expectation value a meaning in terms of measurement.

Standard quantum theory consists of two parts:
  • (S) the shut up and calculate part, which just derives mathematical consequences of definitions, and
  • (I) the interpretation of the formulas from the shut up and calculate part in terms of the real world.
Calling ##\mathrm{Tr}(\hat{\rho} \hat{A})## the expectation of ##A## and denoting it by ##\langle A\rangle## belong to (S). All rules and results used in statistical mechanics to deduce consequences from it also belong to (S). Only telling what ##\langle A\rangle## should mean belongs to (I). In particular, the shut up and calculate part gets different interpretations depending on how one interprets ##\langle A\rangle##. As equilibrium thermodynamics shows, an interpretation in terms of an average over real measurements is not warranted for macroscopic systems where usually only a single measurement is made and the averaging becomes vacuous. Instead, the standard interpretation of ##\langle A\rangle## in any textbook of statistical thermodynamics (in particular the famous book by Callen) is to equate it with the measured macroscopic value, since this identification (and only this) allows one to deduce equilibrium thermodynamics from statistical physics.
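The (S)/(I) split can be made concrete in a small numerical sketch (mine, not from the thread; the state and observable are arbitrary): the purely formal trace ##\mathrm{Tr}(\hat\rho\hat A)## agrees with the eigenvalue-weighted sum one writes down when interpreting it via Born-rule probabilities.

```python
import numpy as np

# An arbitrary mixed qubit state (Hermitian, unit trace, positive) and sigma_x
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
A = np.array([[0, 1], [1, 0]], dtype=complex)

# (S): the purely formal trace expression
expect_S = np.trace(rho @ A).real

# (I): spectral-decompose A and weight its eigenvalues by Born probabilities
vals, vecs = np.linalg.eigh(A)
probs = [(vecs[:, k].conj() @ rho @ vecs[:, k]).real for k in range(2)]
expect_I = sum(v * p for v, p in zip(vals, probs))

print(expect_S, expect_I)   # the same number, read two different ways
```

The arithmetic is identical either way; only the second computation invokes the notion of outcome probabilities at all.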

vanhees71 said:
what's measured on macroscopic systems usually are indeed very "coarse-grained observables",
In the derivation of equilibrium thermodynamics from statistical physics, coarse graining is never used.

vanhees71 said:
As far as I know, the definition of POVM measurements relies also on standard quantum theory, and thus on Born's rule (I've read about them in A. Peres, Quantum Theory: Concepts and Methods). It just generalizes "complete measurements" by "incomplete ones". It's not outside the standard rules of quantum theory.

1. The POVM formalism belongs to (S); the interpretation in terms of measurement of physical processes in the Lab belongs to (I). Clearly, Born's rule is only an extremal case of the POVM interpretation.

2. POVM's can be analyzed in terms of Born's rule only by going to a fictitious bigger Hilbert space and defining there a new dynamics. This is not the dynamics that one gets naturally from the given system.

3. Even though it can be derived from Born's rule in the above way, a measurement by means of a POVM is not itself governed by Born's rule. You seem to equate everything that can somehow be derived from Born's rule with Born's rule itself. But this is a severe abuse of language.

For example, homodyne photon measurement measures both the frequency and the phase of a photon, though both are noncommuting variables. This has nothing at all to do with the kind of measurement following the Born rule.
 
  • #77
vanhees71 said:
Well, usual statistics textbook start with equilibrium distributions and define, e.g., the grand canonical operator
$$\hat{\rho}=\frac{1}{Z} \exp[-\beta (\hat{H}-\mu \hat{Q})],$$
to evaluate expectation values using Born's rule, leading to the Fermi- and Bose-distribution functions.

No, these expectation values (of macroscopic quantities) are evaluated with shut up and calculate, not with Born's rule! The results obtained in equilibrium are then identified with the measured thermodynamic values. Nowhere does an interpretation in terms of Born's rule enter.

To apply Born's rule to ##\langle H\rangle##, say, one would have to measure astronomically many spectral lines, then do a numerical analysis to extract the energy levels (doing the calculations with a horrendous number of digits to be able to resolve them reliably), and then perform an average over the astronomically large number of energy levels. This is completely ridiculous.

Instead, only as many measurements are performed as there are thermodynamic degrees of freedom, and these are compared with the formulas obtained by shut up and calculate.
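For the simplest case, the grand-canonical formula quoted above can be evaluated directly. A minimal sketch (mine, not from the thread; a single fermionic mode in natural units, with the helper name `mean_occupation` being my own) recovering the Fermi-Dirac occupation from ##\hat{\rho}=\exp[-\beta (\hat{H}-\mu \hat{Q})]/Z##:

```python
import numpy as np

# One fermionic mode of energy eps: occupation n is 0 or 1,
# with H = eps*n and Q = n, so rho is diagonal in the number basis.
def mean_occupation(eps, mu, beta):
    n = np.array([0.0, 1.0])               # the two occupation eigenvalues
    w = np.exp(-beta * (eps - mu) * n)     # unnormalized Boltzmann weights
    return float((w / w.sum()) @ n)        # <n> = Tr(rho n)

eps, mu, beta = 1.0, 0.3, 2.0
fd = 1.0 / (np.exp(beta * (eps - mu)) + 1.0)   # Fermi-Dirac function
print(mean_occupation(eps, mu, beta), fd)      # the two agree
```

The computation is pure linear algebra over the diagonal weights; whether one calls the normalized weights "probabilities" or just expansion coefficients is exactly the interpretational question under dispute.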

vanhees71 said:
Classical statistical mechanics is also based on probability concepts since Boltzmann & Co. I don't know, which textbooks you have in mind!
Boltzmann worked with an ideal gas, where one can apply statistical reasoning (though not Born's rule) by averaging over independent atoms. But it works only there!

In his famous textbook from 1901 where the grand canonical ensemble was introduced, Gibbs never averages over atoms, but over ensembles of macroscopic systems! He was well aware that his ensembles were fictitious ones, made of imagined copies of the macroscopic system at hand, needed to justify the application of statistical concepts to single cases. At his time, mathematics wasn't yet as abstract as today where one can use any mathematical concept as a tool in quite diverse applications where the same mathematical notion has completely different uses and interpretations as long as the axiomatically defined postulates are satisfied. Thus he had to take recourse to a fictitious average where today just a reference to shut up and calculate suffices.

As physics cannot depend on imagined but unperformed experiments, it is clear that his expectations are not averages over many experiments but refer to the single case at hand.
 
  • #78
Prathyush said:
when a measurement is performed, the state of the system (meaning the information available to us about it) in general changes to reflect the outcome of the measurement.
When a measurement is performed, the state of the detector changes to a state encoding the measurement result. What happens to the tiny system depends a lot on what it is and how it is measured; for example when measuring a photon it is usually absorbed and no longer exists after the measurement.

The paper in the OP analyzes a very special situation where the measurement is a von Neumann measurement, so that the state after the measurement is an eigenstate corresponding to the measured eigenvalue.
 
  • #79
Ddddx said:
The word "collapse" was never used by the founders of quantum theory.
Von Neumann introduced the concept in 1932 under a different name; he called it state vector reduction. The name is not what matters; what actually happens is.
 
  • #80
vanhees71 said:
Well in Chpt. 8 it's just QT in the ##C^*##-algebra formulation. You don't talk about probabilities but about expectation values etc. So just not mentioning the word "probability" doesn't mean that you don't use probability theory.
By the same reasoning, not mentioning the word "coordinates" in abstract differential geometry would not mean that you don't use coordinates. The point is that coordinates are not unique, and their meaning depends on making choices. Thus not using coordinates is a virtue, and really means that no coordinates are used.

Similarly, there is no probability theory involved in ##C^*##-algebras - nowhere in any definition or result. Probability is not defined until you choose a representation in a separable Hilbert space and an orthonormal basis in it, and it is basis-dependent, whereas the ##C^*##-algebra approach is basis-independent. Which choice is the correct one is one of the problems making up the measurement problem that you so despise. But for statistical mechanics one never needs to make a choice of basis, as the results are all basis-independent. So probability never enters.
 
  • #81
A. Neumaier said:
No, these expectation values (of macroscopic quantities) are evaluated with shut up and calculate, not with Born's rule! The results obtained in equilibrium are then identified with the measured thermodynamic values. Nowhere does an interpretation in terms of Born's rule enter.

To apply Born's rule to ##\langle H\rangle##, say, one would have to measure astronomically many spectral lines, then do a numerical analysis to extract the energy levels (doing the calculations with a horrendous number of digits to be able to resolve them reliably), and then perform an average over the astronomically large number of energy levels. This is completely ridiculous.

Instead, only as many measurements are performed as there are thermodynamic degrees of freedom, and these are compared with the formulas obtained by shut up and calculate.

Boltzmann worked with an ideal gas, where one can apply statistical reasoning (though not Born's rule) by averaging over independent atoms. But it works only there!

In his famous textbook from 1901 where the grand canonical ensemble was introduced, Gibbs never averages over atoms, but over ensembles of macroscopic systems! He was well aware that his ensembles were fictitious ones, made of imagined copies of the macroscopic system at hand, needed to justify the application of statistical concepts to single cases. At his time, mathematics wasn't yet as abstract as today where one can use any mathematical concept as a tool in quite diverse applications where the same mathematical notion has completely different uses and interpretations as long as the axiomatically defined postulates are satisfied. Thus he had to take recourse to a fictitious average where today just a reference to shut up and calculate suffices.

As physics cannot depend on imagined but unperformed experiments, it is clear that his expectations are not averages over many experiments but refer to the single case at hand.
I think that's just a fight about semantics, about what you call Born's rule. For me it's the probabilistic interpretation of the state. Usually it's formulated for pure states and then argued for the more general case of mixed states. The upshot is that you can describe the state as a statistical operator ##\hat{\rho}## with the meaning in terms of probabilities given in one of my postings above:
$$P(a)=\sum_{\beta} \langle a,\beta|\hat{\rho}|a,\beta \rangle.$$
That you can identify ##\langle A \rangle## with "the measured value" for macroscopic systems is due to the fact that ##\langle A \rangle## refers to an observable like, e.g., the center-of-mass position of some object or a fluid cell, and that such observables tend to be sharply peaked around the average value. Of course, a single measurement doesn't tell you anything, as everybody learns in the introductory lab in any physics curriculum. What you call "measurement" is indeed not formalized in the theory but determined by concrete experimental setups in the lab and real-world measurement devices like detectors.
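The "sharply peaked" claim is easy to quantify. A minimal sketch (mine, not from the post; it uses the fact that for a product state the q-variance of an average of ##N## identical single-site observables is the single-site variance divided by ##N##):

```python
import numpy as np

# Relative width of the "macroscopic" average M = (1/N) sum_i sigma_z^(i)
# for N independent qubits, each in the pure state cos(t)|0> + sin(t)|1>.
def rel_width(N, theta):
    m = np.cos(theta) ** 2 - np.sin(theta) ** 2   # <sigma_z> per qubit
    var = (1.0 - m ** 2) / N                      # q-variance of the average
    return np.sqrt(var) / abs(m)                  # width relative to the mean

for N in (100, 10_000, 1_000_000):
    print(N, rel_width(N, 0.5))                   # shrinks 10x per row
```

For thermodynamically large ##N## the relative width is negligible, which is what licenses equating ##\langle A\rangle## with the single measured macroscopic value.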

Of course, you can extend this ideal picture of precisely measured quantities also to microscopic observables via the more general case of incomplete measurements, which is formalized as the POVM formalism, but that's ultimately also based on the fundamental postulates, including Born's rule (at least in the way it's introduced by Peres in his book).
 
  • #82
A. Neumaier said:
By the same reasoning, not mentioning the words coordinates in abstract differential geometry would not mean that you don't use coordinates. The point is that coordinates are not unique, and the meaning of them depends on making choices. Thus not using coordinates is a virtue, and really means that no coordinates are used.

Similarly, there is no probability theory involved in ##C^*##-algebras - nowhere in any definition or result. Probability is not defined until you choose a representation in a separable Hilbert space and an orthonormal basis in it, and it is basis-dependent, whereas the ##C^*##-algebra approach is basis independent. Which choice is the correct one is one of the problems making up the measurement problem that you so despise. But for statistical mechanics one never needs to make a choice of basis as the results are all basis independent. So probability never enters.
Well, we talk about physics not pure mathematics, and you need a rule (called "interpretation") to relate your formalism to what's measured in the real world. This is done by deciding which observable you measure and this determines the basis you have to use to calculate the corresponding probabilities. The ##C^*## formalism is, as far as I can see, equivalent to the standard definition of QT with the advantage to give a more clear mathematical determination of the operator algebra.
 
  • #83
vanhees71 said:
I think that's just fight about semantics, what you call Born's rule. For me it's the probabilistic interpretation of the state.
Then you should say the latter whenever you want to say the former, or you will definitely invite misunderstanding. For the two are far from synonymous. The standard semantics is the one described by wikipedia; nobody apart from you has this far too general usage you just announced. In particular, equating the two is meaningless in the present context - this thread is about deriving Born's rule from statistical mechanics, not about deriving the probabilistic interpretation of quantum mechanics.

vanhees71 said:
Well, we talk about physics not pure mathematics, and you need a rule (called "interpretation") to relate your formalism to what's measured in the real world. This is done by deciding which observable you measure and this determines the basis you have to use to calculate the corresponding probabilities.
Which observable is measured in homodyne photon detection, the example mentioned before?

Moreover, your recipe (calculate the corresponding probability) only works in simple cases where you have an exactly solvable system, hence can evaluate the partition function as a sum over joint eigenstates. But the latter is just one possible way of organizing the computations (shut up and calculate - no interpretation is needed to express the trace as a sum over eigenvalues) and fails in more complex situations.

In equilibrium thermodynamics one wants to measure the total mass of each chemical component (which may be a complex molecule) and the total energy of a macroscopic interacting system. In these cases one never calculates the thermodynamic equation of state in terms of probabilities. Instead one uses mean field approximations and expansions beyond them, as you know very well!

In general, a partition sum is just a piece of shut up and calculate, as it is a mathematically defined expression valid without any interpretation. The interpretation is about relating the final results (the equation of state) to experiments, and this does not involve probabilities at all; it is done simply by equating the expectation of a macroscopic variable with the measured value. Thus this is the true interpretation rule used for macroscopic measurement. Everything else (talk about probabilities, Born's rule, etc.) doesn't enter the game anywhere (unless you want to complicate things unnecessarily, which is against one of the basic scientific principles, called Ockham's razor).
 
  • #84
vanhees71 said:
The ##C^*## formalism is, as far as I can see, equivalent to the standard definition of QT with the advantage to give a more clear mathematical determination of the operator algebra.
It is equivalent at the shut up and calculate level, but has a definite advantage in clarity not only at the conceptual but also at the interpretational level. It dispenses with Born's rule, the philosophically problematic concept of probability, and the choice of basis, except when a concrete experiment singles out a concrete basis.

Another advantage is that it directly works with mixed states, which are by far the most common states in Nature, and avoids the decomposition
vanhees71 said:
The upshot is that you can describe the state as a statistical operator ##\hat{\rho}## with the meaning in terms of probabilities given in one of my postings above:
$$P(a)=\sum_{\beta} \langle a,\beta|\hat{\rho}|a,\beta \rangle.$$
which is completely unphysical, since the pieces in the sum are far from unique and therefore cannot have a physical interpretation. Different decompositions are physically indistinguishable!
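The non-uniqueness claimed here is easy to exhibit. A minimal sketch (mine, not from the thread): two different mixtures of pure states, ##\{|0\rangle, |1\rangle\}## versus ##\{|+\rangle, |-\rangle\}##, give the identical statistical operator, so no measurement can tell the decompositions apart.

```python
import numpy as np

def projector(psi):
    """Rank-1 projector |psi><psi| onto a (normalized) state vector."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

# Decomposition 1: equal mixture of |0> and |1>
rho1 = 0.5 * projector([1, 0]) + 0.5 * projector([0, 1])

# Decomposition 2: equal mixture of |+> and |->
rho2 = 0.5 * projector([1, 1]) + 0.5 * projector([1, -1])

print(np.allclose(rho1, rho2))   # True: the same statistical operator
```

Both mixtures are the maximally mixed qubit state ##\hat\rho = \tfrac{1}{2}\mathbb{1}##; all q-expectations, and hence all measurement statistics, coincide.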
 
  • #85
A. Neumaier said:
I refereed the paper in question here.
I added several paragraphs to the review, summarizing what was actually derived, and pointing out the declared interpretative assumptions of the authors of the paper mentioned in the OP. These assumptions were made explicit in a much later paper, namely:

A.E. Allahverdyan, R. Balian and T.M. Nieuwenhuizen,
A sub-ensemble theory of ideal quantum measurement processes,
Annals of Physics 376 (2017): 324-352.
https://arxiv.org/abs/1303.7257

The traditional motivational introduction to statistical mechanics is based on mixing an ensemble of pure states. However, none of the results of statistical mechanics depends on this motivational prelude. Instead of assuming statistical mechanics without spelling out the precise assumptions made (as in the paper mentioned in the OP and in the big 200-page article with all formal details), which might suggest that their derivation depends on the traditional foundations, the authors are here far more explicit about the assumptions necessary to get their results.

They take in this quite recent paper statistical mechanics as a purely formal theory (i.e., in the shut up and calculate mode) and then give new interpretational principles for how this formalism is to be interpreted. In particular, their interpretational principles are independent of Born's rule (as a statement about measurement). As a consequence, the derivation of Born's rule is a result, not a silent assumption. For the present discussion, the most relevant statements from this paper are (emphasis by the authors, but notation for the density operator adapted to the present context):

Allahverdyan Balian and Nieuwenhuizen said:
One should therefore, as done for q-bits, distinguish ##\langle\hat O\rangle=\mathrm{tr}\,\hat\rho\hat O## from an ordinary expectation value by denominating it as a
``q-expectation value''. Likewise, a ``q-correlation'', the q-expectation value of a product of two observables, should not be confused with an
ordinary correlation. Also, the q-expectation value ##\langle\hat\pi\rangle## of a projection operator ##\hat\pi## is not an ordinary probability, but a formal object which we will call ``q-probability'' rather than ``probability''. Born's rule is not postulated here, it will come out (Subsec. 6.4) as a property of the apparatus at the issue of an ideal measurement.

Allahverdyan Balian and Nieuwenhuizen said:
Interpretative principle 1. If the q-variance of a macroscopic observable is negligible in relative size its q-expectation value is identified with the value of the corresponding macroscopic physical variable, even for an individual system.

These statements exactly match the assumptions made in my thermal interpretation of quantum mechanics.

By the way, they cite Bell's theorem as a main reason why one cannot simply equate the q-expectations with expectation values in the classical sense since some of the properties of expectations valid in the classical case fail to hold in the quantum case.
 
  • #86
A. Neumaier said:
How can a mathematical definition say anything about the interpretation?
This is an interesting question. Most physicists seem to think it does say everything about the interpretation; in other words, the formalism is not neutral, as the "shut up and calculate" motto seems to imply.
From your posts I infer that what you think must inform the interpretation is the macroscopic measurement, but this thread's discussion seems to go in circles, because the formalism doesn't seem to distinguish clearly between uncertainties derived from classical lack of knowledge and those inherent in the theory. So the important distinction between statistics and probabilities that has been made here cannot be resolved by the formalism. But just going to the statistical mechanics interpretation seems to lack a new definition of measurement, or I can't see the improvement over the basis-dependent probabilities: how is the macroscopic measurement of the specific experiment or observation connected to the formalism in a basis-independent way not relying on the Born rule?

Also, a starting point here seems to be that Born's rule is just a postulate about probabilities, not acknowledging that the key feature of the rule is that there is an element differentiating it from ordinary probabilities, which is also passed over by the formalism.
 
  • #87
RockyMarciano said:
the formalism doesn't seem to distinguish clearly between uncertainties derived from classical lack of knowledge and those inherent in the theory.
The quantum formalism is independent of knowledge. Subjective issues have no place in physics, except for judging the adequacy of the assumptions and approximations made.
RockyMarciano said:
But just going to the statistical mechanics interpretation seems to lack a new definition of measurement
A measurement of a microscopic system is a reading from a macroscopic device that contains information about the state of the microscopic system. The nature of the coupling and the dynamical analysis must tell which information is encoded in the measurement result, to which accuracy, and with which probabilities.

This definition of a measurement is operationally checkable, since one can prepare the states and read the measurement results, and can thus compare the calculations with experiment without any ambiguity of concepts.

The only interpretation needed is how the reading from the macroscopic device is related to its macroscopic properties. In the thermal interpretation, this poses no problem at all. The consequences for the microscopic theory are then a matter of deduction, not one of postulation.

Born's rule, by contrast, is very incomplete in that it doesn't say the slightest thing about what constitutes a measurement, so it is an uncheckable piece of philosophy, not of science, unless you already know what measurement means. But this requires knowing a lot of quantum physics that goes into building high quality measurement devices for quantum objects. Thus foundations based on Born's rule are highly circular - unlike foundations based on a properly understood statistical mechanics approach.
 
  • Like
Likes Mentz114
  • #88
A. Neumaier said:
Then you should say the latter whenever you mean the former, or you will certainly be misunderstood. For the two are far from synonymous. The standard semantics is the one described by Wikipedia; nobody apart from you has the far too general usage you just announced. In particular, equating the two is meaningless in the present context: this thread is about deriving Born's rule from statistical mechanics, not about deriving the probabilistic interpretation of quantum mechanics.

Which observable is measured in homodyne photon detection, the example mentioned before?

Moreover, your recipe (calculate the corresponding probability) only works in simple cases where you have an exactly solvable system, hence can evaluate the partition function as a sum over joint eigenstates. But the latter is just one possible way of organizing the computations (shut up and calculate - no interpretation is needed to express the trace as a sum over eigenvalues) and fails in more complex situations.

In equilibrium thermodynamics one wants to measure the total mass of each chemical component (which may be a complex molecule) and the total energy of a macroscopic interacting system. In these cases one never calculates the thermodynamic equation of state in terms of probabilities. Instead one uses mean field approximations and expansions beyond them, as you know very well!

In general, a partition sum is just a piece of shut up and calculate, as it is a mathematically defined expression valid without any interpretation. The interpretation is about relating the final results (the equation of state) to experiments, and this does not involve probabilities at all; it is done simply by equating the expectation of a macroscopic variable with the measured value. Thus this is the true interpretation rule used for macroscopic measurement. Everything else (talk about probabilities, Born's rule, etc.) doesn't enter the game anywhere (unless you want to complicate things unnecessarily, which is against one of the basic scientific principles, called Ockham's razor).
In the Wikipedia article in the first few lines they give precisely the definition, I gave some postings above. I'm using the standard terminology, while you prefer to deviate from it so that we have to clarify semantics instead of discussing physics.

In homodyne detection what's measured are intensities, as in any quantum-optical measurement. I refer to Scully & Zubairy, Quantum Optics. One application is to characterize an input signal (e.m. radiation, annihilation operator ##\hat{a}##) using a reference signal ("local oscillator", annihilation operator ##\hat{b}##). They are sent through a beam splitter with transmittivity ##T## and reflectivity ##R##, ##T+R=1##. The operators at the two output channels are then defined by (I don't put hats on top of the operators from now on):
$$c=\sqrt{T} a + \mathrm{i} \sqrt{1-T} b, \quad d=\mathrm{i}\sqrt{1-T}\, a + \sqrt{T} b.$$
What's measured is the intensity at channel ##c##, i.e., ##c^{\dagger} c##.

If the local oscillator is in a coherent state ##|\beta_l \rangle## you get for the expectation value
$$\langle c^{\dagger} c \rangle=T \langle a^{\dagger} a \rangle + (1-T)|\beta_l|^2 - 2 \sqrt{T(1-T)} |\beta_l| \langle X(\phi_l+\pi/2)\rangle$$
with
$$X(\phi)=\frac{1}{2} \left(a \exp(-\mathrm{i} \phi)+a^{\dagger} \exp(\mathrm{i} \phi)\right).$$
All this is done within standard QT using Born's rule in the above given sense. I don't see, which point you want to make with this example. It's all standard Q(F)T.
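The interference term in this expectation value is easy to check numerically in the special case where the signal mode is itself in a coherent state ##|\alpha\rangle## (so that ##\langle a\rangle=\alpha## and ##\langle a^\dagger a\rangle=|\alpha|^2##). A minimal sketch in Python, with arbitrary illustrative values for ##T##, ##\alpha##, ##|\beta_l|##, ##\phi_l##; note that the sign of the interference term depends on the beam-splitter phase convention (with the ##+\mathrm{i}## convention in ##c## it comes out positive):

```python
import math
import cmath

# Illustrative parameters (arbitrary choices)
T = 0.7                      # transmittivity, R = 1 - T
R = 1.0 - T
alpha = 0.8 + 0.3j           # coherent amplitude of the signal mode a
beta_mag, phi_l = 1.5, 0.4   # local-oscillator amplitude |beta_l| and phase
beta = beta_mag * cmath.exp(1j * phi_l)

# For coherent inputs, <c^dagger c> with c = sqrt(T) a + i sqrt(R) b
# reduces to the modulus squared of the combined amplitude:
lhs = abs(math.sqrt(T) * alpha + 1j * math.sqrt(R) * beta) ** 2

# Quadrature expectation <X(phi)> = Re(conj(alpha) e^{i phi})
# for a coherent state, from X(phi) = (a e^{-i phi} + a^dagger e^{i phi})/2
def X_exp(phi):
    return (alpha.conjugate() * cmath.exp(1j * phi)).real

# DC terms plus the interference term proportional to X(phi_l + pi/2);
# with the +i convention in c the interference term carries a plus sign
rhs = (T * abs(alpha) ** 2 + R * beta_mag ** 2
       + 2 * math.sqrt(T * R) * beta_mag * X_exp(phi_l + math.pi / 2))

assert abs(lhs - rhs) < 1e-12
```

The check is purely kinematical: it verifies that the homodyne signal separates into the two direct intensities plus a term reading out one quadrature of the signal field.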

Now you switch to partition sums, i.e., thermodynamical systems. Take as an example black-body radiation (or any other ideal gas of quanta), i.e., a radiation field in thermal equilibrium with the walls of a cavity at temperature ##T=1/\beta##.
The statistical operator is
$$\hat{\rho}=\frac{1}{Z} \exp(-\beta \hat{H}).$$
The partition sum here is
$$Z=\mathrm{Tr} \exp(-\beta \hat{H}).$$
The Hamiltonian is given by (I use a (large) quantization volume ##V## with periodic boundary conditions for simplicity)
$$\hat{H}=\sum_{\vec{n} \in \mathbb{Z}^3, h \in \{\pm 1\}} \omega_{\vec{n}} \hat{a}^{\dagger}(\vec{n},h) \hat{a}(\vec{n},h), \quad \vec{p} = \frac{2 \pi}{L} \vec{n}, \quad \omega_{\vec{n}}=|\vec{p}|.$$
For the following it's convenient to evaluate the somewhat generalized partition function
$$Z=\mathrm{Tr} \exp\Big(-\sum_{\vec{n},h} \beta(\vec{n},h) \omega_{\vec{n}} \hat{N}(\vec{n},h)\Big).$$
Using the Fock states leads to
$$Z=\prod_{\vec{n},h} \frac{1}{1-\exp(-\omega_{\vec{n}} \beta(\vec{n},h))}.$$
The thermodynamic limit is given by making the volume ##V=L^3## large:
$$\ln Z=-\sum_{\vec{n},h} \ln [1-\exp(-\omega_{\vec{n}} \beta(\vec{n},h))]=-V \sum_{h} \int_{\mathbb{R}^3} \frac{\mathrm{d}^3 \vec{p}}{(2\pi)^3} \ln [1-\exp(-\beta(\vec{p},h) |\vec{p}|)].$$
The spectrum (i.e., the mean number of photons per three-momentum) is calculated by
$$\langle N(\vec{p},h) \rangle=-\frac{1}{\omega_{\vec{p}}} \frac{\delta}{\delta \beta(\vec{p},h)} \ln Z=\frac{V}{(2\pi)^3}\,\frac{1}{\exp(\beta |\vec{p}|)-1}.$$
It's measured with help of a spectrometer (or with the Planck satellite for the cosmic microwave background).

It's all standard QT and uses, of course, Born's rule.
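For a single mode this bookkeeping can be checked directly: the mean occupation obtained from the ##\beta##-derivative of ##\ln Z## must agree with the Bose-Einstein formula and with a brute-force sum over Fock states. A small Python sketch (##\beta## and ##\omega## are arbitrary illustrative values):

```python
import math

beta, omega = 1.3, 0.9   # illustrative inverse temperature and mode frequency
x = beta * omega

# Brute force: truncated sums over Fock occupation numbers n = 0, 1, 2, ...
Z = sum(math.exp(-x * n) for n in range(200))
N_direct = sum(n * math.exp(-x * n) for n in range(200)) / Z

# Single-mode partition function: ln Z = -ln(1 - e^{-beta omega});
# mean occupation via -(1/omega) d(ln Z)/d(beta), here by central difference
def lnZ(b):
    return -math.log(1.0 - math.exp(-b * omega))

h = 1e-6
N_deriv = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h) / omega

# Closed Bose-Einstein form
N_closed = 1.0 / (math.exp(x) - 1.0)

assert abs(N_direct - N_closed) < 1e-9
assert abs(N_deriv - N_closed) < 1e-5
```

All three routes are "shut up and calculate" in the sense of the earlier post: the interpretational question is only how ##\langle N\rangle## is related to what the spectrometer reads.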
 
  • #89
A. Neumaier said:
Whereas Born's rule is very incomplete in that it doesn't say the slightest thing about what constitutes a measurement, so it is an uncheckable piece of philosophy not of science, unless you know already what measurement means. But this requires knowing a lot of quantum physics that goes into building high quality measurement devices for quantum objects. Thus foundations based on Born's rule are highly circular - unlike foundations based on a properly understood statistical mechanics approach.
No theory (classical mechanics and field theory included) says "the slightest thing about what constitutes a measurement". Physical observables are defined by concrete measurement devices in the lab, not by a theoretical formalism. The theoretical formalism rather gives a mathematical description of such observations. As the name already tells, a statistical mechanics (or rather statistical physics) approach also uses probabilities in its foundations; what else is statistics but applied probability theory?

Only with Born's rule does the quantum formalism become interpretable without contradicting experience. It's not enough to give the other postulates (concerning the formal math describing quantum kinematics and dynamics).
 
  • #90
vanhees71 said:
In the Wikipedia article in the first few lines they give precisely the definition, I gave some postings above. I'm using the standard terminology, while you prefer to deviate from it so that we have to clarify semantics instead of discussing physics.
Well, when the concepts are not clear one must first clarify the semantics before one can communicate physics.

You give conflicting definitions of what you mean by the Born rule, and not all of them can be true. For example, you said in post #27 that the definition ##\langle A\rangle = \mathrm{tr}\,(\rho A)## is Born's rule. Where in the first few lines of the Wikipedia article is this stated?
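For concreteness, the two formulations at issue are easy to compare in the simplest case: for a qubit in a pure state and ##A=\sigma_z##, the trace formula ##\langle A\rangle=\mathrm{tr}\,(\rho A)## reproduces the Born-rule weighted sum of eigenvalues ##\sum_i a_i\,|\langle a_i|\psi\rangle|^2##. A minimal Python sketch (the angles are arbitrary illustrative values; whether the trace formula itself deserves the name "Born's rule" is exactly the point under dispute):

```python
import math
import cmath

# Illustrative pure qubit state |psi> = cos(theta)|0> + sin(theta) e^{i phi}|1>
theta, phi = 0.6, 1.1
psi = [math.cos(theta), math.sin(theta) * cmath.exp(1j * phi)]

# Density matrix rho = |psi><psi|
rho = [[psi[i] * psi[j].conjugate() for j in range(2)] for i in range(2)]

# Observable A = sigma_z: eigenvalues +1, -1 with eigenvectors |0>, |1>
A = [[1, 0], [0, -1]]

# Trace formula: <A> = tr(rho A) = sum_{i,k} rho_{ik} A_{ki}
tr_rho_A = sum(rho[i][k] * A[k][i] for i in range(2) for k in range(2))

# Born rule: eigenvalues weighted by the probabilities |<a_i|psi>|^2
born = (+1) * abs(psi[0]) ** 2 + (-1) * abs(psi[1]) ** 2

assert abs(tr_rho_A.real - born) < 1e-12
assert abs(tr_rho_A.imag) < 1e-12
```

The two numbers agree by construction whenever ##A## is expanded in its eigenbasis; the disagreement in the thread is about which of the two statements is the postulate and which the consequence.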
 
