The thermal interpretation of quantum physics

  • #91
DarMM said:
I wouldn't say so. The thermal reservoir, the environment, is responsible for the stochastic nature of subsystems when you don't track the environment. However, it doesn't guide them like the Bohmian potential; it's not an external object of a different class/type from the particles, it's just another system. Also it's not universal, i.e. the environment is just whatever external source of noise is relevant for the current system, e.g. air in the lab or thermal fluctuations of the atomic structure of the measuring device.
In addition, the most important point in contradistinction to the Bohmian theory (which, I think, still convincingly works only in the non-relativistic approximation) is that in the thermal interpretation (if I finally understand it right, as meant by @A. Neumaier and summarized in #84) there is no need for a Bohmian non-local description; one can use the standard description in terms of local relativistic QFTs without having to develop a pilot-wave theory (which would be needed for fields rather than particles, I'd guess).
 
  • Like
Likes DarMM
  • #92
vanhees71 said:
Particularly for photons a classical-particle picture, as envisaged by Einstein in his famous 1905 paper on "light quanta", carefully titled as "a heuristic approach", is highly misleading. There's not even a formal way to define a position operator for massless quanta (as I prefer to say instead of "particles") in the narrow sense. All we can calculate is a probability for a photon to hit a detector at the place where this detector is located.
It's interesting that in Haag's book "Local Quantum Physics" and Steinmann's "Perturbative Quantum Electrodynamics and Axiomatic Field Theory" the notion of a detector operator or probe is introduced to give formal meaning to the particle concept, with an n-particle state being a state that can activate at most n such probes.
 
  • Like
Likes Peter Morgan, dextercioby and vanhees71
  • #93
AlexCaledin said:
- so, the TI seems to be just camouflaging Bohm's "guiding", trying to ascribe it to the universal thermal reservoir, right?
vanhees71 said:
In addition, the most important point in contradistinction to the Bohmian theory (which, I think, still convincingly works only in the non-relativistic approximation) is that in the thermal interpretation as summarized in #84 there is no need for a Bohmian non-local description; one can use the standard description in terms of local relativistic QFTs without having to develop a pilot-wave theory.
There are similarities and differences:

The thermal interpretation is deterministic, and the nonlocal multipoint q-expectations ignored in approximate calculations are hidden variables accounting for the stochastic effects observed in the coarse-grained descriptions of the preparation and detection processes.

But there are no additional particle coordinates as in Bohmian mechanics that would need to be guided; instead, the particle concept is declared to be an approximation only.
 
  • Like
Likes AlexCaledin
  • #94
vanhees71 said:
Great! If this is indeed the correct summary of what is meant by "Thermal Interpretation", it's pretty clear that it is just a formalization of the usual practical use of Q(F)T in analyzing real-world observations.
Yes, assuming I'm right of course! :nb)

I would say the major difference is that the q-expectations ##\langle A\rangle## are seen as actual quantities, not averages of a quantity over an ensemble of results. So for instance ##\langle A(t)B(s) \rangle## isn't some kind of correlation between ##A(t)## and ##B(s)## but a genuinely new property. Also, these properties are fundamentally deterministic: there is no fundamental randomness, just lack of control of the environment.
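To make the computational content of this concrete, here is a minimal numerical sketch (my own toy example, not taken from the papers) of how a two-time q-expectation ##\langle A(t)B(s)\rangle## is obtained as a single number from the state and the Heisenberg-picture operators; the specific qubit Hamiltonian and state below are arbitrary choices for illustration.
```python
import numpy as np
from scipy.linalg import expm

# Minimal toy setup (illustrative choice): a qubit with H = (omega/2) * sigma_z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
omega = 1.0
H = 0.5 * omega * sz

def heisenberg(A, t):
    """Heisenberg-picture operator A(t) = U(t)^dagger A U(t), with U(t) = exp(-i H t)."""
    U = expm(-1j * H * t)
    return U.conj().T @ A @ U

# Some state rho (here a pure state |+><+|; any density matrix works)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

# The two-time q-expectation <A(t)B(s)> = Tr(rho A(t) B(s)) is a single number
# assigned to the state; since A(t)B(s) is generally not Hermitian, it may be complex.
t, s = 0.7, 0.2
value = np.trace(rho @ heisenberg(sx, t) @ heisenberg(sx, s))
print(value)
```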
 
  • #95
I guess @A. Neumaier will tell us. Why the heck hasn't he written this down in a 20-30 page physics paper rather than with so much text obviously addressed to philosophers? (It's not meant in as bad a way as it may sound ;-))).
 
  • #96
vanhees71 said:
Great! If this is indeed the correct summary of what is meant by "Thermal Interpretation", it's pretty clear that it is just a formalization of the usual practical use of Q(F)T in analyzing real-world observations.
It is intended to be precisely the latter, without the partially misleading probabilistic underpinning in the foundations that gave rise to nearly a century of uneasiness and dispute.
Part III said:
The thermal interpretation is inspired by what physicists actually do rather than what they say. It is therefore the interpretation that people actually work with in the applications (as contrasted with work on the foundations themselves), rather than only paying lip service to it.
DarMM said:
Yes, assuming I'm right of course!
vanhees71 said:
I guess @A. Neumaier will tell us. Why the heck hasn't he written this down in a 20-30 page physics paper
It is partially right, but a number of details need correction. I have little time today and tomorrow; I will reply on Sunday afternoon.
 
  • Like
Likes dextercioby, vanhees71 and DarMM
  • #97
No rush! At least I'm right to first order; I await the nonperturbative corrections!
 
  • Like
Likes vanhees71
  • #98
vanhees71 said:
The upshot of this long story is that the particle picture of subatomic phenomena is quite restricted.
Thank you for the long post. I am aware of what you wrote, but your summary is very good.
 
  • #99
vanhees71 said:
The meaning of your interpretation gets more and more enigmatic to me.

In the standard interpretation the possible values of observables are given by the spectral values of self-adjoint operators. To find these values you'd have to measure energy precisely. This is a fiction of course. It's even a fiction in classical physics, because real-world measurements are always uncertain, and that's why we need statistics from day one in the introductory physics lab to evaluate our experiments. Quantum theory has nothing to do with these uncertainties of real-world measurements.

At the same time you say the very same about measurements within your thermal interpretation as I express within the standard interpretation. As long as the meaning of q-averages is not clarified, I cannot even understand the difference between the statements. That's the problem.
The meaning is enigmatic only when viewed in terms of the traditional interpretations, which look at the same matter in a very different way.$$\def\<{\langle} \def\>{\rangle}$$
Given a physical quantity represented by a self-adjoint operator ##A## and a state ##\rho## (of rank 1 if pure),
  • all traditional interpretations give the same recipe for computing a number of possible idealized measurement values, the eigenvalues of ##A##, of which one is exactly (according to most formulations) or approximately (according to your cautious formulation) measured with probabilities computed from ##A## and ##\rho## by another recipe, Born's rule (probability form), while
  • the thermal interpretation gives a different recipe for computing a single possible idealized measurement value, the q-expectation ##\<A\>:=Tr~\rho A## of ##A##, which is approximately measured.
  • In both cases, the measurement involves an additional uncertainty related to the degree of reproducibility of the measurement, given by the standard deviation of the results of repeated measurements.
  • Tradition and the thermal interpretation agree in that this uncertainty is at least ##\sigma_A:=\sqrt{\<A^2\>-\<A\>^2}## (which leads, among others, to Heisenberg's uncertainty relation).
  • But they make very different assumptions concerning the nature of what is to be regarded as idealized measurement result.
That quantities with large uncertainty are erratic in measurement is nothing special to quantum physics but very familiar from the measurement of classical noisy systems. The thermal interpretation asserts that all uncertainty is of this kind, and much of my three papers is devoted to arguing why this is indeed consistent with the assumptions of the thermal interpretation.

Now it is experimentally undecidable what an ''idealized measurement result'' should be, since only actual results are measured, not idealized ones.

What to consider as idealized version is a matter of interpretation. What one chooses determines what one ends up with!

As a result, the traditional interpretations are probabilistic from the start, while the thermal interpretation is deterministic from the start.

The thermal interpretation has two advantages:
  • It assumes less technical mathematics at the level of the postulates (no spectral theorem, no notion of eigenvalue, no probability theory).
  • It allows one to make definite statements about each single quantum system, no matter how large or small it is.
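A minimal numerical sketch of the two recipes contrasted above (the spin-1/2 observable and state are arbitrary choices of mine for illustration): the traditional recipe yields the eigenvalues of ##A## with Born probabilities, the thermal-interpretation recipe yields the single value ##\<A\>=Tr~\rho A## together with its uncertainty ##\sigma_A##.
```python
import numpy as np

# Toy observable and state (illustrative choice): sigma_x for a spin-1/2 system
A = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)
rho = np.outer(psi, psi.conj())            # pure state, rank 1, trace 1

# Traditional recipe: possible idealized values = eigenvalues of A,
# occurring with Born probabilities p_k = <e_k|rho|e_k>.
eigvals, eigvecs = np.linalg.eigh(A)
probs = np.array([np.real(v.conj() @ rho @ v) for v in eigvecs.T])
print("eigenvalues:", eigvals, "Born probabilities:", probs)

# Thermal-interpretation recipe: the single idealized value is the q-expectation
# <A> = Tr(rho A), approximately measured with uncertainty at least sigma_A.
expA = np.real(np.trace(rho @ A))
sigmaA = np.sqrt(np.real(np.trace(rho @ A @ A)) - expA**2)
print("<A> =", expA, "sigma_A =", sigmaA)
```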
 
  • #100
So what is a wavefunction?
 
  • #101
DarMM said:
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:
  1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. This makes these objects highly "rich" in terms of properties, for ##\langle\phi(t_1)\phi(t_2)\rangle## is not merely a statistic for the field value but actually a property in itself, and so on for higher correlators.
  2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this): a 2-photon system is not simply "two photons", since it has non-local correlator properties neither of them possesses alone.
  3. From point 2 we may infer that quantum systems are highly extended objects in many cases. What is normally considered to be two spacelike separated photons is in fact a highly extended object.
  4. Stochastic features of QM are generated by the system interacting with the environment. Under certain assumptions (Markov, infinite limit) we can show the environment causes a transition from a system pure state to a probability distribution of system pure states, what is called "collapse" normally. Standard Born-Markov stuff: the environment is essentially a reservoir in thermal equilibrium, and under the Markov assumption it "forgets" information about the system, so information purely dissipates into the environment without transfer back to the system. The system is stochastically driven into a "collapsed" state. I'm not sure if this also requires the secular approximation (i.e. the system's isolated evolution ##H_S## is on a much shorter time scale than the environmental influence ##H_{ES}##), but no matter.
Thus we may characterize quantum mechanics as the physics of property-rich, non-reductive, highly extended nonlocal objects which are highly sensitive to their environment (i.e. the combined system-environment states are almost always metastable and "collapse" stochastically).

As we remove these features, i.e. make systems less environmentally sensitive, more reductive and less property-rich (so that certain properties become purely functions of others and properties of the whole are purely those of the parts) and more locally concentrated, we approach Classical Physics.
Point 1, is fine.

Point 2 is a bit misleading with ''reduction'', but becomes correct when phrased in terms of a ''lack of complete decomposability''. A subsystem is selected by picking a vector space of quantities (linear operators) relevant to the subsystem. Regarding a tensor product of two systems as two separate subsystems (as traditionally done) is therefore allowed only when all quantities that correlate the two systems are deemed irrelevant. Thinking in terms of the subsystems only hence produces the weird features visible in the traditional way of speaking.

Point 3 then follows.

Point 4 is valid only in a very vague sense, and I cannot repair it quickly; so please wait, or rethink it until I answer.
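A small numerical illustration of the point about decomposability (my own example, not from the papers): for a two-qubit singlet, both reduced states are the same maximally mixed matrix, so a description in terms of the two subsystems alone discards the correlating quantities such as ##\langle\sigma_z\otimes\sigma_z\rangle## that the whole system does possess.
```python
import numpy as np

# Two-qubit singlet |psi> = (|01> - |10>)/sqrt(2), basis order |00>,|01>,|10>,|11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

sz = np.diag([1.0, -1.0]).astype(complex)

# Reduced states of the two subsystems (partial traces): both equal I/2,
# so the subsystem descriptions alone carry no information at all.
rho4 = rho.reshape(2, 2, 2, 2)
rho1 = np.einsum('ikjk->ij', rho4)   # trace over the second factor
rho2 = np.einsum('kikj->ij', rho4)   # trace over the first factor
print(rho1, rho2, sep="\n")

# A correlating quantity of the whole system: <sigma_z x sigma_z> = -1,
# whereas the product of the two subsystem expectations is 0.
corr = np.real(np.trace(rho @ np.kron(sz, sz)))
prod = np.real(np.trace(rho1 @ sz)) * np.real(np.trace(rho2 @ sz))
print(corr, prod)
```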
 
  • Like
Likes vanhees71 and DarMM
  • #102
ftr said:
So what is a wavefunction?
A vector in the image of the operator ##\rho##.
 
  • #103
Representing what?
 
  • #104
ftr said:
Representing what?
In general nothing. For a system in a pure state it represents the state, as in this case ##\rho## can be reconstructed as the multiple ##\psi\psi^*## with trace 1.
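A tiny sketch of this statement (the particular qubit state is my own choice for illustration): ##\rho=\psi\psi^*## is a rank-one, trace-one projector, and the wavefunction is recovered, up to a phase, as a vector spanning the image of ##\rho##.
```python
import numpy as np

# A pure qubit state psi (illustrative choice) and its density matrix rho = psi psi^dagger
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(np.trace(rho).real)                   # 1.0: trace-one, rank-one projector

# The image of rho is one-dimensional and spanned by psi: the eigenvector
# with eigenvalue 1 recovers the wavefunction up to an overall phase.
vals, vecs = np.linalg.eigh(rho)
recovered = vecs[:, np.argmax(vals)]
print(abs(np.vdot(recovered, psi)))         # 1.0: same ray as psi
```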
 
  • #105
A. Neumaier said:
Point 4 is valid only in a very vague sense, and I cannot repair it quickly; so please wait, or rethink it until I answer.
Thank you for the response. I'll try to rethink it; I got the B&P book mentioned in Paper III.
 
  • #107
A. Neumaier said:
The meaning is enigmatic only when viewed in terms of the traditional interpretations, which look at the same matter in a very different way.$$\def\<{\langle} \def\>{\rangle}$$
Given a physical quantity represented by a self-adjoint operator ##A## and a state ##\rho## (of rank 1 if pure),
  • all traditional interpretations give the same recipe for computing a number of possible idealized measurement values, the eigenvalues of ##A##, of which one is exactly (according to most formulations) or approximately (according to your cautious formulation) measured with probabilities computed from ##A## and ##\rho## by another recipe, Born's rule (probability form), while
  • the thermal interpretation gives a different recipe for computing a single possible idealized measurement value, the q-expectation ##\<A\>:=Tr~\rho A## of ##A##, which is approximately measured.
  • In both cases, the measurement involves an additional uncertainty related to the degree of reproducibility of the measurement, given by the standard deviation of the results of repeated measurements.
  • Tradition and the thermal interpretation agree in that this uncertainty is at least ##\sigma_A:=\sqrt{\<A^2\>-\<A\>^2}## (which leads, among others, to Heisenberg's uncertainty relation).
  • But they make very different assumptions concerning the nature of what is to be regarded as idealized measurement result.
Why do you say bullet 2 is different from bullet 1? I use the same trace formula of course. What else? It's the basic definition of an expectation value in QT, and it's the most general representation-free formulation of Born's rule. Also with the other three points, you say on the one hand that the thermal interpretation uses the same mathematical formalism, but that it's all differently interpreted. You even use specific probabilistic/statistical notions like "uncertainty" and define it in the usual statistical terms as the standard deviation/2nd cumulant. Why is it then not right to have the same heuristics about it in your thermal interpretation (TI) as in the minimal interpretation (MI)?
A. Neumaier said:
That quantities with large uncertainty are erratic in measurement is nothing special to quantum physics but very familiar from the measurement of classical noisy systems. The thermal interpretation asserts that all uncertainty is of this kind, and much of my three papers is devoted to arguing why this is indeed consistent with the assumptions of the thermal interpretation.
But this overlooks that QT assumes that not all observables can have determined values at once. At best, i.e., if technically feasible for simple systems, you can only prepare a state such that a complete compatible set of observables takes determined values. All observables incompatible with this set (almost always) have indeterminate values, and this is not due to unideal measurement devices but is an inherent feature of the system.
A. Neumaier said:
Now it is experimentally undecidable what an ''idealized measurement result'' should be, since only actual results are measured, not idealized ones.

What to consider as idealized version is a matter of interpretation. What one chooses determines what one ends up with!

As a result, the traditional interpretations are probabilistic from the start, while the thermal interpretation is deterministic from the start.

The thermal interpretation has two advantages:
  • It assumes less technical mathematics at the level of the postulates (no spectral theorem, no notion of eigenvalue, no probability theory).
  • It allows one to make definite statements about each single quantum system, no matter how large or small it is.
Well, using the standard interpretation it's pretty simple to state what an idealized measurement is: Given the possible values of the observable (e.g., some angular momentum squared ##\vec{J}^2## and one component, usually ##J_z##), you perform an idealized measurement if the resolution of the measurement device is good enough to resolve the (necessarily discrete!) spectral values of the associated self-adjoint operators of this measured quantity. Of course, in the continuous spectrum you don't have ideal measurements in the real world, but any quantum state also predicts an inherent uncertainty given by the formula above. To verify this prediction you need an apparatus which resolves the measured quantity much better than this quantum-mechanical uncertainty.

You can of course argue against this very theoretical definition of "ideal measurements", because sometimes there are even more severe constraints, but these are also fundamental and not simply due to our inability to construct "ideal apparatuses". E.g., in relativistic QT there is an in-principle uncertainty for the localization of (massive) particles due to the uncertainty relation and the finiteness of the limit speed of light (Bohr and Rosenfeld, Landau). But here it is also physically clear why it doesn't make sense to resolve the position better than this fundamental limit: rather than localizing (i.e. preparing) the particle better, you produce more particles, and the same holds for measurement, i.e., the attempt to measure the position much more precisely involves interactions with other particles, leading again to the creation of more particles rather than a better localization measurement.

Of course, it is very difficult to consider in general terms all these subtle special cases.

But let's see whether I now understand the logic behind your TI better. From #84 I understand that you start with defining the formal core just by the usual Hilbert-space formulation, with a stat. op. representing the state and self-adjoint (or maybe even more general) operators representing the observables, but without the underlying physical probabilistic meaning of the MI. The measurable values of the observables are not the spectral values of the self-adjoint operators representing observables but the q-expectations abstractly defined by the generalized Born rule (the above quoted trace formula). At first glance this makes a lot of sense. You are free to define a theory mathematically without a physical intuition behind it. The heuristics must of course come in when teaching the subject, but it is not inherent to the "finalized" theory.

However, I think this interpretation of what's observable is flawed, because it intermingles the uncertainties inherent in the preparation of the system in a given quantum state with the precision of the measurement device. This is a common misconception leading to much confusion. The Heisenberg uncertainty relation (HUR) is not a description of the unavoidable perturbation of a quantum system, which can be made negligible in principle only for sufficiently large/macroscopic systems, but a description of the impossibility of preparing incompatible observables with a better common uncertainty than given by the HUR. E.g., having prepared a particle with a quite accurate momentum, its position has a quite large uncertainty, but nothing prevents you from measuring the position of the particle much more accurately (neglecting the above mentioned exception for relativistic measurements, i.e., arguing for non-relativistic conditions). Of course, there is indeed the influence of the measurement procedure on the measured system, but that's not described by the HUR. There's plenty of recent work on this issue (if I remember right, one of the key authors on these aspects is Busch).
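As a numerical aside on this preparation-uncertainty reading of the HUR (my own toy check, not part of the original argument; units with ##\hbar=1##): a Gaussian wave packet prepared with position spread ##\sigma## has momentum spread ##1/(2\sigma)##, so the product of the two preparation uncertainties saturates the bound ##\hbar/2## regardless of how precisely either quantity is subsequently measured.
```python
import numpy as np

# Gaussian wave packet on a grid, hbar = 1; prepared position spread sigma
sigma = 0.7
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (4 * sigma**2))

# Preparation uncertainty in position, from |psi|^2
prob = np.abs(psi) ** 2
mean_x = np.sum(x * prob) * dx
sig_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob) * dx)

# Preparation uncertainty in momentum, from <p^2> = integral |dpsi/dx|^2 dx (here <p> = 0)
dpsi = np.gradient(psi, dx)
sig_p = np.sqrt(np.sum(np.abs(dpsi) ** 2) * dx)

# The product of the two preparation uncertainties saturates hbar/2 = 0.5
print(sig_x, sig_p, sig_x * sig_p)
```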
 
  • #108
A. Neumaier said:
Point 2 is a bit misleading with ''reduction'', but becomes correct when phrased in terms of a ''lack of complete decomposability''. A subsystem is selected by picking a vector space of quantities (linear operators) relevant to the subsystem. Regarding a tensor product of two systems as two separate subsystems (as traditionally done) is therefore allowed only when all quantities that correlate the two systems are deemed irrelevant. Thinking in terms of the subsystems only hence produces the weird features visible in the traditional way of speaking.
Einstein coined the very precise word "inseparability" for this particular feature of quantum entanglement. As Einstein clarified in a German paper of 1948, he was quite unhappy about the fact that the famous EPR paper doesn't really represent his particular criticism of QT, because it's not so much the "action at a distance" aspect (which indeed only comes into the game from adding collapse postulates to the formalism, as in the Heisenberg and von Neumann versions of the Copenhagen interpretation, which imho is completely unnecessary to begin with) but the inseparability. Einstein thus preferred a stronger version of the "linked-cluster principle" than predicted by QT due to entanglement, i.e., he didn't even like the correlations due to entanglement, which however can never be used to contradict the "relativistic signal-propagation speed limit"; he insisted on the separability of the objectively observable physical world.

This is of course the very point of Bell's contribution to the issue. It's impossible to guess what Einstein would have argued about the fact that the modern Bell measurements show that you have to give up either locality or determinism. I think at the present state of affairs, where the predictions of QT are confirmed with amazing significance in all these measurements, and with all the successful attempts to close several loopholes, we have to live with the inseparability according to QT and relativistic local microcausal QFT. As long as there's nothing better, it's only speculative to argue about possible more comprehensive theories, let alone to be sure that such a theory exists at all!
 
  • #109
DarMM said:
Perfect, this was clear from the discussion of QFT, but I just wanted to make sure of my understanding in the NRQM case (although even this is fairly clear from 4.5 of Paper II).

So in the Thermal Interpretation we have the following core features:

1. Q-Expectations and Q-correlators are physical properties of quantum systems, not predicted averages. [...]

2. Due to the above we have a certain "lack of reduction" (there may be better ways of phrasing this): a 2-photon system is not simply "two photons", since it has non-local correlator properties neither of them possesses alone.

3. From point 2 we may infer that quantum systems are highly extended objects in many cases. What is normally considered to be two spacelike separated photons is in fact a highly extended object.

Viewed in this way, one sees the core similarities between Mermin's (embryonic) interpretation and Arnold's (independent, more developed) interpretation.

Mermin's "Ithaca interpretation" boils down to: "Correlations have physical reality; that which they correlate does not." A couple of the quotes at the start of his paper are also enlightening:

[W]e cannot think of any object apart from the possibility of its connection with other things.

Wittgenstein, Tractatus, 2.0121

If everything that we call “being” and “non-being” consists in the existence and non-existence of connections between elements, it makes no sense to speak of an element’s being (non-being)... Wittgenstein, Philosophical Investigations, 50.

Mermin also relates this to earlier points of view expressed by Lee Smolin and to Carlo Rovelli's "Relational QM". (References are in Mermin's paper above.)

4. Stochastic features of QM are generated by the system interacting with the environment. [...]
... i.e., the decoherence from a non-diagonal to a diagonal state operator (hence, quantum statistics -> classical statistics), driven by interaction with the environment (e.g., random gravitational interaction with everything else). Even extremely weak interactions of this kind can diagonalize a state operator extraordinarily quickly.
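A minimal sketch of the diagonalization just mentioned (a generic pure-dephasing Lindblad toy of my own, not the gravitational mechanism or the specific model of the papers): the off-diagonal elements of the state operator decay exponentially at a rate set by the coupling, while the diagonal is untouched.
```python
import numpy as np

# Pure-dephasing Lindblad toy: d(rho)/dt = -i[H, rho] + gamma*(sz rho sz - rho).
# Off-diagonal elements decay as exp(-2*gamma*t); the diagonal is untouched.
sz = np.diag([1.0, -1.0]).astype(complex)
H = np.zeros((2, 2), dtype=complex)      # no Hamiltonian dynamics, to isolate the effect
gamma = 5.0                              # toy dephasing rate

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())        # start in a coherent superposition

dt, steps = 1e-4, 5000                   # integrate to t = 0.5 with simple Euler steps
for _ in range(steps):
    drho = -1j * (H @ rho - rho @ H) + gamma * (sz @ rho @ sz - rho)
    rho = rho + dt * drho

# Off-diagonals ~ 0.5*exp(-2*gamma*t) ~ 0.003, diagonal entries remain 0.5
print(np.round(rho, 4))
```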
 
  • Like
Likes *now* and vanhees71
  • #110
Okay a possibly more accurate rendering of point 4.

In the Thermal Interpretation, as discussed in point 1, we have quantities ##\langle A \rangle## which take on a specific value in a specific state ##\rho##. Although ##\langle A \rangle## uses the mathematics of (generalized) statistics, this is of no more significance than using a vector space in applications where the vectors are not to be understood as displacements, i.e. the physical meaning of a mathematical operation is not tied to the original context of its discovery. ##\langle A \rangle## is simply a quantity.

However it is an uncertain value. Quantum mechanical systems are intrinsically delocalised/blurred in their quantities, in the same kind of fundamental sense in which "Where is a city, where is a wave?" is a fuzzy question. I say blurred because delocalised seems appropriate to position alone. This is neither to say it has a precise position that we are uncertain of (as in a statistical treatment of Newtonian mechanics) nor a fundamentally random position (as in some views of QM). For example, particles actually possess world tubes rather than world lines.

However, standard scientific practice is to treat such "blurred" uncertainties statistically, with the same mathematics one uses to treat precise quantities of which one is ignorant. This similarity in the mathematics used, however, is what has led to viewing quantum quantities as being intrinsically random.

For microscopic quantities the blurring is so extreme that a single observation cannot be regarded as an accurate measurement of a microscopic quantity. For example, in the case of a particle, when we measure position the measuring device simply becomes correlated with a point within the tube, giving a single discrete reading, but this does not give one an accurate picture of the tube.

Thus we must use the statistics of multiple measurements to construct a proper measurement of the particle's blurred position/world tube.

We are then left with the final question of:
"Why if the particle's quantities are truly continuous, for example having world tube, do our instruments record discrete outcomes?"

Section 5.1 of Paper III discusses this. In essence the environment drives the "slow large scale modes" of the measuring device into one of a discrete set of states. These modes correspond to macroscopically observable properties of the device, e.g. pointer reading. Since one cannot track the environment and information is lost into it via dissipation, this takes the form of the macroscopic slow modes stochastically evolving into a discrete set of states.

Thus we have our measuring devices develop an environmentally driven effectively "random" discrete reading of the truly continuous/blurred quantities of the microscopic system. We then apply standard statistical techniques to multiple such discrete measurements to reconstruct the actual continuous quantities.
 
  • Like
Likes julcab12, vanhees71 and Mentz114
  • #111
DarMM said:
Okay a possibly more accurate rendering of point 4. [...]
What you say here is correctly understood, but the final argument is not yet conclusive. The reason why the measurement is often discrete - bi- or multistability - is not visible.
 
  • Like
Likes *now* and DarMM
  • #112
Correct, I forgot to include that. If I get the chance I'll write it up later today.
 
  • #113
A. Neumaier said:
What you say here is correctly understood, but the final argument is not yet conclusive. The reason why the measurement is often discrete - bi- or multistability - is not visible.
By this do you simply mean the fact that states over all modes (the full manifold) are metastable and decay, under environmental action, to states on the slow mode manifold, which is disconnected? The disconnectedness of the slow manifold then provides the discreteness.
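A toy caricature of that picture (my own construction, not the model of Section 5.1 of Paper III): an overdamped pointer variable in a double-well potential, driven by unresolved environmental noise, relaxes from the metastable point ##x=0## into one of the two disconnected well regions, so a continuous deterministic-plus-noise evolution nevertheless yields one of a discrete set of readings.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pointer variable x in a double-well potential V(x) = (x^2 - 1)^2, with overdamped
# Langevin dynamics dx = -V'(x) dt + noise dW. The neighbourhoods of the two wells
# at x = -1 and x = +1 play the role of the disconnected "slow manifold".
def run(x0=0.0, steps=10000, dt=1e-3, noise=0.3):
    x = x0
    for _ in range(steps):
        drift = -4 * x * (x**2 - 1)                    # -V'(x)
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
    return x

# Runs starting from the metastable point x = 0 settle near -1 or +1:
# the outcome is decided by the unresolved environmental noise.
finals = np.array([run() for _ in range(100)])
print("fraction near +1:", np.mean(finals > 0))
print("fraction near -1:", np.mean(finals < 0))
```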
 
  • #114
So finally, do we arrive at the same consistent histories? (And can the Copenhagen QM then be derived from them?)
 
  • #115
AlexCaledin said:
So finally, do we arrive at the same consistent histories? (And can the Copenhagen QM then be derived from them?)
The whole approach is quite different from consistent histories. The measuring device's large scale features are driven into a disconnected manifold, states on each component of which represent a "pointer" outcome. This evolution is deterministic, but stochastic under lack of knowledge of the environment: you just aren't sure of the environmental state.

This is very different from consistent histories where everything is fundamentally probabilistic, but one shows the emergence of classical probability for certain macro-observables.
 
  • Like
Likes vanhees71
  • #116
After all it seems as if it's also probabilistic or statistical, but it's statistical in the same sense as classical statistics; it's the lack of detailed knowledge about macroscopic systems which brings in the probabilistic element.

This contradicts, however, the very facts addressed by EPR, or better by Einstein in his 1948 essay concerning the inseparability. The paradigmatic example is given by the simple polarization-entangled two-photon states. Two photons can be very accurately prepared in such states via, e.g., parametric downconversion. Then you have two photons with quite well-defined momenta and strictly entangled polarization states. Just concentrating on the polarization states and measuring the two photons by detectors in appropriate (far distant) regions of space, we can describe the polarization state as the pure state ##\hat{\rho}=|\Psi \rangle \langle \Psi|## with
$$|\Psi \rangle=\frac{1}{\sqrt{2}} (|H,V \rangle - |V,H \rangle).$$
The single-photon states are given by the usual partial traces, leading to
$$\hat{\rho}_1=\hat{\rho}_2=\frac{1}{2} \hat{1}.$$
Here ##1## and ##2## label the places of the polarization measurement devices, i.e., a polarization filter plus a photodetector behind it.

Here we have no description of the measurement devices included, but irreducibly probabilistic states for the single-photon polarizations, i.e., the polarization of each single photon is maximally indeterminate (in the sense of information theory, with the usual Shannon-Jaynes-von Neumann entropy as information measure). According to the minimal interpretation the polarization state is not indeterminate due to lack of knowledge about the environment or the state of the measurement device; the indeterminacy is inherent in the state of the complete closed quantum system, i.e., the two-photon system. It's in a pure state and thus we have as complete knowledge about it as we can have, but due to the entanglement and the consequent inseparability of the two photons, the single-photon polarization states are irreducibly and maximally indeterminate, i.e., we have complete knowledge about the system and still there are indeterminate observables.

In this case you cannot simply redefine some expectation values as what the true observables are, then argue that the Ehrenfest equations for these expectation values describe a deterministic process, and lump the probabilistic nature of the real-world observations (i.e., that one just measures unpolarized photons when looking at one of the polarization-entangled two-photon states) into a mere lack of knowledge about the environment.
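For concreteness, a numerical check of the formulas above (the ##\pm1##-valued analyzer observable below is my own convention, chosen for illustration): the partial traces of ##|\Psi\rangle\langle\Psi|## are both ##\frac{1}{2}\hat{1}##, while the joint polarization correlation at analyzer angles ##a,b## equals ##-\cos 2(a-b)##.
```python
import numpy as np

# Polarization singlet |Psi> = (|H,V> - |V,H>)/sqrt(2); basis order |HH>,|HV>,|VH>,|VV>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial traces: both single-photon polarization states equal (1/2) * identity
rho4 = rho.reshape(2, 2, 2, 2)
rho1 = np.einsum('ikjk->ij', rho4)
rho2 = np.einsum('kikj->ij', rho4)
print(rho1, rho2, sep="\n")

# A +/-1 valued polarization observable along analyzer angle a (illustrative convention):
# A(a) = |a><a| - |a_perp><a_perp| with |a> = cos(a)|H> + sin(a)|V>
def A(a):
    v = np.array([np.cos(a), np.sin(a)], dtype=complex)
    w = np.array([-np.sin(a), np.cos(a)], dtype=complex)
    return np.outer(v, v.conj()) - np.outer(w, w.conj())

# Joint q-expectation <A(a) x A(b)> equals -cos(2(a-b)) for this state
a, b = 0.4, 1.1
print(np.real(np.trace(rho @ np.kron(A(a), A(b)))), -np.cos(2 * (a - b)))
```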
 
  • #117
So, according to the thermal QM, every event (including all this great discussion) was pre-programmed by the Big Bang's primordial fluctuations?
 
  • Like
Likes Lord Jestocost
  • #118
DarMM said:
By this do you simply mean the fact that states over all modes (the full manifold) are metastable and decay, under environmental action, to states on the slow mode manifold, which is disconnected? The disconnectedness of the slow manifold then provides the discreteness.
Yes, since you wanted to answer the question
DarMM said:
"Why if the particle's quantities are truly continuous, for example having world tube, do our instruments record discrete outcomes?"
but the argument you then gave didn't answer it.

vanhees71 said:
it's statistical in the same sense as classical statistics; it's the lack of detailed knowledge about macroscopic systems which brings in the probabilistic element.
Yes. The dynamics of the q-expectations of the universe is deterministic, and stochastic features appear in exactly the same way as in Laplace's deterministic classical mechanics of the universe.

vanhees71 said:
This contradicts, however, the very facts addressed by EPR, or better by Einstein in his 1948 essay concerning the inseparability.
No, because my notion of indecomposability (post #101) together with extended causality (Section 4.4 of Part II) allows more than Einstein's requirement of separability. But it still ensures locality in the quantum field sense and Poincaré invariance and hence correctly addresses relativity issues.

vanhees71 said:
The paradigmatic example is given by the simple polarization-entangled two-photon states.
Two-photon states are discussed in some detail in Section 4.5 of Part II.

vanhees71 said:
Why the heck hasn't he written this down in a 20-30 page physics paper rather than with so much text obviously addressed to philosophers? (It's not meant in as bad a way as it may sound ;-))).
Maybe you can understand now why. There is a lot to be said to make sure that one doesn't fall back into the traditional interpretations and can see how the old issues are settled in a new way. In the past, different people asked different questions and had different problems with the thermal interpretation when I discussed it earlier in a more informal way. Originally I wanted to write a 20 page paper, but it grew and grew into the present 4-part series.

vanhees71 said:
the polarization of each single photon is maximally indeterminate (in the sense of information theory, with the usual Shannon-Jaynes-von Neumann entropy as information measure).
This is again reasoning from the statistical interpretation, which doesn't apply to the thermal interpretation.

In the thermal interpretation, the single photons in your 2-photon state are unpolarized. This is simply a different (but completely determined) state than one of the polarized states. There is no experimental way to get more information about an unpolarized photon. Thus it is a state of maximal information; the Shannon entropy (a purely classical concept) is here irrelevant.

The fact that unpolarized light can be turned into polarized light by a filter is not against this since the light coming out of the filter is different from that going into the filter. One can also turn a neutron into a proton plus an electron just by isolating it and waiting long enough, but this doesn't prove that the neutron is composed of a proton and an electron!

vanhees71 said:
According to the minimal interpretation [...] we have complete knowledge about the system and still there are indeterminate observables.
Whereas according to the thermal interpretation, if we have complete knowledge about the system we know all its idealized measurement results, i.e., all q-expectation values, which is equivalent to knowing its density operator.

If you want to assess the thermal interpretation you need to discuss it in terms of its own interpretation and not in terms of the statistical interpretation!
 
  • Like
Likes dextercioby
  • #119
AlexCaledin said:
So, according to the thermal QM, every event (including all this great discussion) was pre-programmed by the Big Bang's primordial fluctuations?
Did Laplace have the same complaint with his clockwork universe?

Yeah, God must be an excellent programmer who knows how to create an interesting universe! But maybe consciousness is not preprogrammed and allows for some user decisions (cf. my fantasy ''How to Create a Universe'' - written 20 years ago when the thermal interpretation was still an actively pursued dream rather than a reality)? We don't know yet...

In any case, the deterministic universe we live in is much more interesting than Conway's game of life which already creates quite interesting toy universes in a deterministic way by specifying initial conditions and deterministic rules for the dynamics.
 
  • Like
Likes eloheim and AlexCaledin
  • #120
A. Neumaier said:
Yes, since you wanted to answer the question

but the argument you then gave didn't answer it.
I wasn't sure if it also required an argument that the slow mode manifold was in fact disconnected, and I couldn't think of one. Metastability of states on the full manifold decaying into those on the slow manifold is enough, provided the slow mode manifold is disconnected. Is there a reason to expect this in general?

A. Neumaier said:
Point 2 is a bit misleading with ''reduction'', but becomes correct when phrased in terms of a ''lack of complete decomposability''.
This somewhat confuses me. I would have thought the notion of reducibility just means it can be completely decomposed, i.e. the total system is simply a composition of the two subsystems and nothing more. What subtlety am I missing?
 
