Does a measurement setup determine the reality of spin measurement outcomes?

In summary, in the Copenhagen interpretation spin is not considered real before a measurement is performed, while in Bohmian mechanics spin is determined before the measurement by the wave function, which is considered ontologically real. In the Bohmian interpretation, however, spin does not exist as a separate entity; only particle positions do. The measurement of spin in Bohmian mechanics is simply the measurement of whether the particle ends up in the upper or lower detector of a Stern-Gerlach apparatus. In some interpretations, such as the thermal interpretation, spin is considered to be a real number that is only discretized by measurement.
  • #141
DarMM said:
Just extends it. Peres's point is that measurements in general cannot be considered as measurements of quantities known from the classical theory.

He gives the example of a POVM measurement on the spin degree of freedom of a particle that can't be understood as measuring ##\textbf{J}\cdot \textbf{n}## for any direction ##\textbf{n}##.
Indeed, I checked his textbook again. There's nothing in the foundations that differs at all from any other textbook. It's also a clear statement of the orthodox (minimal) probabilistic interpretation.
 
  • #142
vanhees71 said:
Indeed, I checked his textbook again. There's nothing in the foundations that differs at all from any other textbook. It's also a clear statement of the orthodox (minimal) probabilistic interpretation.
So in what sense is it called spin? Moreover, isn't it true that the electron of the Dirac equation has no charge or spin unless it interacts with an electromagnetic field? Why is that?
 
  • #143
If you start to discuss Bohr and von Neumann and the status of 1927, there's no chance of getting anywhere. The whole trouble with "interpretation" has its origin in these enigmatic mumblings of Bohr and particularly Heisenberg. It's completely irrelevant nearly 100 years later. QT is successfully used to describe everything hitherto observed, wherever it is applicable, and that's a lot.

In standard texts the Shannon entropy is a measure for missing information given a probability distribution relative to complete information. In QT it's the same as von Neumann's definition,
$$S=-k_{\text{B}} \mathrm{Tr}(\hat{\rho} \ln \hat{\rho}).$$
This is consistent with the fact that in QT a pure state refers to complete possible information about the system. Indeed, in that case ##\hat{\rho}=|\psi \rangle \langle \psi|## with some normalized ket ##|\psi \rangle##, and ##S=0##, i.e., there's no "missing information".
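As a quick numerical illustration (my own sketch, not part of the original discussion; plain NumPy, with ##k_{\text{B}}=1## so the entropy comes out in nats):

```python
# Von Neumann entropy S = -Tr(rho ln rho), computed from the eigenvalues of rho.
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]        # use the convention 0 ln 0 = 0
    return -np.sum(evals * np.log(evals))

# Pure state |psi><psi|: complete possible information, so S = 0.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
print(von_neumann_entropy(np.outer(psi, psi)))   # ~0.0

# Maximally mixed qubit: maximal missing information, S = ln 2.
print(von_neumann_entropy(np.eye(2) / 2))        # ~0.693
```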

In his textbook Peres does not follow the usual definition, which seems to have led to some confusion in the postings above. If I interpret pages 280 and 281 correctly, it's the following situation (I make up a simple concrete example to make Peres's very dense remarks more concrete).

Suppose we have Alice preparing pure spin states of a spin-1/2 particle. Let's denote by ##|\sigma_{\vec{n}} \rangle## the eigenvectors of the spin component ##\hat{s}_{\vec{n}}=\vec{n} \cdot \hat{\vec{s}}##, where ##\vec{n} \in \mathbb{R}^3## is an arbitrary unit vector. Of course, the eigenvalues are ##\sigma_{\vec{n}} \in \{1/2,-1/2 \}## (in the following I use natural units with ##\hbar=k_{\text{B}}=1##).

To make a concrete example, take three arbitrary unit vectors ##\vec{n}_j## and suppose Alice randomly prepares particles in the pure states ##\hat{P}_j=|\sigma_{\vec{n}_j}=+1/2 \rangle \langle \sigma_{\vec{n}_j}=+1/2|## with probabilities ##p_j##. Don't ask me why, but Peres defines the "Shannon entropy" for this case as
$$S_{\text{Peres/Shannon}} \equiv H=-\sum_{j=1}^3 p_j \ln p_j.$$
The corresponding quantum state is, however, given by
$$\hat{\rho}=\sum_{j=1}^3 p_j \hat{P}_j,$$
and the usual entropy a la von Neumann (and Jaynes) is
$$S=-\mathrm{Tr} \hat{\rho} \ln \hat{\rho}.$$
This is of course usually different from ##H##, and indeed the two entropies refer to different probabilities. ##H## describes the entropy as the lack of information for an observer who knows that Alice prepares the states ##\hat{P}_j## with probabilities ##p_j##. In contradistinction, ##S## answers the question of what the missing information is, given ##\hat{\rho}##, relative to complete possible information. Complete possible information for a spin would be to have prepared an arbitrary spin component ##\hat{s}_{\vec{n}}## in an arbitrary direction ##\vec{n}##. The probabilities for the possible outcomes are of course
$$p_{\vec{n}}(\sigma_{\vec{n}})=\langle \sigma_{\vec{n}} |\hat{\rho} |\sigma_{\vec{n}} \rangle=\sum_{j=1}^3 p_j \frac{1}{2} (1+2 \sigma_{\vec{n}} \vec{n} \cdot \vec{n}_j)$$
and
$$S=-\sum_{\sigma_{\vec{n}} =\pm 1/2} p_{\vec{n}}(\sigma_{\vec{n}}) \ln (p_{\vec{n}}(\sigma_{\vec{n}}))=-\mathrm{Tr} \hat{\rho} \ln \hat{\rho}.$$
There's of course nothing wrong with Peres's definitions, but it's a bit confusing to deviate from standard terminology. Usually one defines the Shannon-Jaynes entropy of QT as the von Neumann entropy.
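For concreteness, here is a quick numerical check of this example (my own sketch, not from the thread; the unit vectors ##\vec{n}_j## and probabilities ##p_j## below are arbitrary choices). It confirms that ##H## and ##S## generally differ:

```python
# Compare Peres's H = -sum p_j ln p_j with the von Neumann entropy
# S = -Tr(rho ln rho) of the resulting mixture of three pure spin states.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pure_state(n):
    """Projector onto spin-up along unit vector n: (1 + n.sigma)/2."""
    return 0.5 * (np.eye(2) + n[0]*sx + n[1]*sy + n[2]*sz)

p = np.array([0.5, 0.3, 0.2])                 # arbitrary probabilities p_j
ns = [np.array([0.0, 0.0, 1.0]),              # arbitrary directions n_j
      np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0])]

rho = sum(pj * pure_state(nj) for pj, nj in zip(p, ns))

H = -np.sum(p * np.log(p))                    # Peres's "Shannon" entropy
evals = np.linalg.eigvalsh(rho)
S = -np.sum(evals * np.log(evals))            # von Neumann entropy

print(H, S)   # ~1.030 vs ~0.486: the two values differ, as claimed above
```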

Peres applies the Shannon entropy to a different situation and then calls that the Shannon entropy. As usual, in any textbook one must read the definitions carefully, since the author may deviate from standard terminology or from the definitions in other textbooks and papers.
 
  • #144
A. Neumaier said:
No. In his book, he is careful to talk only about quantum tests (for which Born's rule is on safe grounds) rather than measuring observables (for which Born's rule is questionable). On p.14 he says,

And he acknowledges (on p.11) that the statistical interpretation ...

... which he discusses in Chapter 12 without resolving them.

In the formal Chapters 2 and 3 he uses over 50 pages to define his general setup in a careful way.
On p.63 he reemphasizes (on the formal level)

and mentions that many kinds of measurements have nothing to do with (the caricature notion called) measurement in Born's rule.
The standard formalism is an idealization. To reduce a POVM to this idealization one needs to introduce a bigger, unphysical Hilbert space encoding a suitable ancilla (p.282+288), and pretend (via Neumark's theorem, p.285f) that the idealization holds for an artificially constructed dynamics in this extended Hilbert space.

In the preface of his book, Peres compares (p.xiii) his approach with that of von Neumann, which can be taken to represent the standard view:
In which sense isn't the von Neumann measurement (projectors) a special case of the POVM formalism? Maybe I have to read the chapters in Peres's book I skipped over, because I found them overly complicated and sometimes confusing, e.g., calling something "Shannon entropy" which differs from the usual use in other QT textbooks (see my previous posting). Of course, I guess within his book he is consistent.

Maybe I also misunderstood what he means by "quantum test". I understood it to mean what's usually called a "measurement", with the extended understanding of the word, using POVMs instead of ideal filter measurements.

He also sometimes uses a formulation that I have carefully banned from my own language for some time now: he says that given a preparation in a state ##|\psi_1 \rangle##, the probability for the system to be in state ##|\psi_2 \rangle## is ##|\langle \psi_2|\psi_1 \rangle|^2##. This can lead to confusion in connection with the dynamics. I consider it important to refer to measurements of observables (in the standard sense), i.e., the probabilities are about outcomes (eigenvalues) when measuring observables accurately: in Born's rule one vector in the scalar product must be an eigenvector of an observable operator and the other a state vector, and these in general time-evolve differently (e.g., in the Schrödinger picture the state vectors propagate with the complete Hamiltonian, while the eigenvectors of observables are constant in time).

When it comes to the details, I find Peres not always satisfactory, but concerning interpretation he seems to be on the no-nonsense (i.e., the standard probabilistic interpretation) side ;-)).
 
  • #145
ftr said:
So in what sense is it called spin? Moreover, isn't it true that the electron of the Dirac equation has no charge or spin unless it interacts with an electromagnetic field? Why is that?
I don't know. Where did you get this from?

The free Dirac field is a quantum field describing particles and antiparticles with spin 1/2. It's an irreducible representation of the orthochronous Lorentz group (but a reducible representation of the proper orthochronous Lorentz group). Like all the "physical representations" it is characterized by several quantum numbers, namely the mass and the spin. Invariance of the corresponding Lagrangian under phase transformations leads to a conserved charge. The particles and antiparticles have the same mass and spin and opposite charges.

Now you can gauge the global symmetry with its associated conserved charge and interpret the Abelian gauge field as the electromagnetic field. Then you have a model for charges and the electromagnetic field, and the conserved charge of this gauged symmetry is what you call the "electric charge".
 
  • #146
vanhees71 said:
Indeed, I checked his textbook again. There's nothing in the foundations that differs at all from any other textbook. It's also a clear statement of the orthodox (minimal) probabilistic interpretation.
I'm not sure which part of his textbook you looked at, but see this paper:
https://arxiv.org/abs/quant-ph/0207020
 
  • #147
vanhees71 said:
In his textbook Peres does not follow the usual definition, which seems to have led to some confusion in the postings above
Yeah, as I mentioned above, he uses "Shannon entropy" in a non-standard way, as the entropy of the distribution over density matrices.
 
  • #148
vanhees71 said:
In which sense isn't the von Neumann measurement (projectors) a special case of the POVM formalism?
In no sense. But it is a highly idealized one, and one cannot derive from it the general one. The 'derivation' given proceeds by embedding the physical Hilbert space into a fictitious tensor product with an ancilla space, and has only the status of a consistency check.
Asher Peres (p.288) said:
In real life, POVMs are not necessarily implemented by the algorithm of Eq. (9.98). There is an infinity of other ways of materializing a given POVM. The importance of Neumark’s theorem lies in the proof that any arbitrary POVM with a finite number of elements can in principle, without violating the rules of quantum theory, be converted into a maximal test, by introducing an auxiliary, independently prepared quantum system (the ancilla).
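To make Neumark's construction concrete, here is a minimal sketch of my own (the "trine" POVM below is a standard textbook example of my choosing, not Peres's Eq. (9.98)). For a rank-one POVM ##\hat E_m=|a_m\rangle\langle a_m|## with ##\sum_m \hat E_m=\hat 1##, the matrix ##V## whose rows are the ##\langle a_m|## is an isometry into a larger space, and the maximal test ##\{|m\rangle\langle m|\}## there reproduces the POVM statistics:

```python
# Neumark-style dilation of a rank-1 qubit POVM into a projective
# (maximal) test on an enlarged 3-dimensional space.
import numpy as np

# Trine POVM: a_m = sqrt(2/3) * (cos(2 pi m/3), sin(2 pi m/3)).
angles = [0.0, 2*np.pi/3, 4*np.pi/3]
a = [np.sqrt(2/3) * np.array([np.cos(t), np.sin(t)]) for t in angles]
E = [np.outer(v, v) for v in a]                 # POVM elements E_m
assert np.allclose(sum(E), np.eye(2))           # completeness

V = np.array(a)                                 # 3x2 matrix, rows <a_m|
assert np.allclose(V.T @ V, np.eye(2))          # V is an isometry

rho = np.array([[0.7, 0.2], [0.2, 0.3]])        # arbitrary test state
for m in range(3):
    p_povm = np.trace(E[m] @ rho)               # Tr(E_m rho) downstairs
    p_proj = (V @ rho @ V.T)[m, m]              # projective test upstairs
    assert np.isclose(p_povm, p_proj)
print("POVM statistics reproduced by a maximal test in the extended space")
```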
vanhees71 said:
Maybe I also misunderstood what he means by "quantum test". I understood it to mean what's usually called a "measurement", with the extended understanding of the word, using POVMs instead of ideal filter measurements.
No. A quantum test is a test for ''being in the state ##\phi##,'' corresponding to a von Neumann measurement of the projector onto the 1-dimensional space spanned by ##\phi##. His axiomatization (in Chapters 2 and 3) only concerns these quantum tests, which are indeed the measurements for which Born's rule (saying here that a positive test is achieved with probability ##\phi^*\rho\phi##, postulated on p.56) is impeccable.
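In code, such a quantum test is just one number (a trivial sketch of my own, for a qubit; the states are arbitrary choices):

```python
# Born's rule for a quantum test: the test for "being in the state phi"
# succeeds with probability phi* rho phi.
import numpy as np

phi = np.array([1.0, 1.0]) / np.sqrt(2)      # tested state
rho = np.array([[0.9, 0.0], [0.0, 0.1]])     # prepared state
print(np.real(phi.conj() @ rho @ phi))       # 0.5
```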

Peres has no need to claim to have measured eigenvalues of a given operator representing a given observable. Instead, he derives this under certain idealizations (not present in a general POVM) by constructing the operator on p.63 from a given collection of quantum tests forming a ''maximal test'', for which he postulates realizability on p.54. The maximal test tests for each state in a given complete orthonormal basis. Since only finitely many quantum tests can be realized, this amounts to the assumption of a finite-dimensional Hilbert space, showing the idealization involved in his derivation.

Later he forgets that his definition of an observable is constructed from a given maximal test, and hence has no meaning in any other basis: carried away by the power of the formal calculus (and since he needs it to make contact with tradition), he asks on p.64 for ''the transformation law of the components of these observable matrices, when we refer them to another basis''. So I don't find his point of view convincing.
vanhees71 said:
He also sometimes uses a formulation that I have carefully banned from my own language for some time now: he says that given a preparation in a state ##|\psi_1 \rangle##, the probability for the system to be in state ##|\psi_2 \rangle## is ##|\langle \psi_2|\psi_1 \rangle|^2##. This can lead to confusion in connection with the dynamics. I consider it important to refer to measurements of observables (in the standard sense), i.e., the probabilities are about outcomes (eigenvalues) when measuring observables accurately
At least Peres is more careful and consistent than you.

You are using Born's rule, claiming in (2.1.3) of your lecture notes that what is measured are exact eigenvalues - although these are never measured exactly - to derive on p.21 the standard formula for the q-expectation (what you there call the mean value) of known observables (e.g., the mean energy ##\langle H\rangle## in equilibrium statistical mechanics) with unknown (most likely irrational) spectra. But you claim that the resulting q-expectation is not a theoretical construct but is ''in agreement with the fundamental definition of the expectation value of a stochastic variable in dependence of the given probabilities for the outcome of a measurement of this variable.'' This would hold only if your outcomes matched the eigenvalues exactly - ''accurately'' is not enough.
 
  • #149
DarMM said:
Yeah, as I mentioned above, he uses "Shannon entropy" in a non-standard way, as the entropy of the distribution over density matrices.
But your usage makes the value of the Shannon entropy dependent on a context (the choice of an orthonormal basis), hence is also not the same as the one vanhees71 would like to use:
vanhees71 said:
Usually one defines the Shannon-Jaynes entropy of QT as the von Neumann entropy.
Thus we now have three different definitions, and it is far from clear which one is standard.

On the other hand, why should one give two different names to the same concept?
 
  • #150
A. Neumaier said:
But your usage makes the value of the Shannon entropy dependent on a context
A. Neumaier said:
Thus we now have three different definitions, and it is far from clear which one is standard.
Yes, in quantum information theory the Shannon entropy is the entropy of the classical model induced by a context. So it naturally depends on the context. I don't see why this is a problem; it's a property of a context. There are many information-theoretic properties that are context dependent in quantum information theory.

Von Neumann entropy is a separate quantity and a property of the state, sometimes called the quantum entropy; it is equal to the minimum Shannon entropy taken over all contexts.

I don't see that what @vanhees71 and I are saying is that different. He's just saying that the von Neumann entropy is the quantum generalization of Shannon entropy. That's correct. Shannon entropy is generalized to the von Neumann entropy, but classical Shannon entropy remains as the entropy of a context.

It's only Peres's use, referring to the entropy of the distribution over densities, that seems nonstandard to me.
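A small numerical illustration of this minimization (my own sketch, not from the thread; the state and the scan over real rotation angles are arbitrary choices, sufficient here because the state is real):

```python
# The Shannon entropy of the outcome distribution depends on the measured
# basis (the context); its minimum over bases is the von Neumann entropy,
# attained in the eigenbasis of rho.
import numpy as np

rho = np.array([[0.8, 0.1], [0.1, 0.2]])        # arbitrary qubit state

def shannon_in_basis(rho, theta):
    """Outcome entropy when measuring in the basis rotated by theta."""
    c, s = np.cos(theta), np.sin(theta)
    U = np.array([[c, -s], [s, c]])
    p = np.diag(U.T @ rho @ U)                   # p_k = <k|rho|k>
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

evals = np.linalg.eigvalsh(rho)
S_vN = -np.sum(evals * np.log(evals))

thetas = np.linspace(0.0, np.pi, 1000)
S_min = min(shannon_in_basis(rho, t) for t in thetas)
print(S_min, S_vN)   # nearly equal; every other context gives more entropy
```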
 
  • #151
ftr said:
Moreover, isn't it true that the electron of the Dirac equation has no charge or spin unless it interacts with an electromagnetic field? Why is that?
Do you have a source for that? In any case, in Bohmian mechanics there is a clear answer: spin is not a property of the particle but of the guiding wave, which is a spinor; the guiding wave imparts spin onto the particle.

In SED (a semi-Bohmian semiclassical competitor to QED) particles are fundamentally spinless as well; spin is imparted onto particles by the circularly polarized modes of the ground state of the background field via the Lorentz force.
 
  • #152
DarMM said:
I'm not sure which part of his textbook you looked at, but see this paper:
https://arxiv.org/abs/quant-ph/0207020
Yes, this is a generalization of idealized measurements to the imprecise measurements of real-world detectors, formalized in terms of the POVM formalism. Idealized, i.e., precise measurements are the special case where the ##\hat{E}_m## are projectors ##|m \rangle \langle m|##, with ##m## labelling a complete orthonormal set of eigenvectors of the measured observable. I have no clue how else to interpret the example used by Peres, if the ##\hat{J}_k## are not the self-adjoint (I don't think Hermitian is sufficient, though Peres claims this) operators representing spin.
 
  • #153
vanhees71 said:
I have no clue how else to interpret the example used by Peres, if the ##\hat{J}_k## are not the self-adjoint (I don't think Hermitian is sufficient, though Peres claims this) operators representing spin.
He's not saying the ##\hat{J}_k## don't represent spin. He's saying that a typical POVM cannot be understood as a measurement of ##J\cdot n## for any direction ##n##, i.e., a typical POVM cannot be associated with spin in a given direction, or in fact with any classical quantity. It seems to be simply an abstract representation of the responses of a given device.
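A concrete way to see the distinction (my own sketch, using the standard trine POVM rather than any specific device from Peres): a precise measurement of ##\vec J\cdot\vec n## has orthogonal spectral projectors, while these POVM elements satisfy ##\hat E_m^2\neq\hat E_m## and so cannot be the spectral projectors of any ##\vec J\cdot\vec n##.

```python
# The trine POVM on a (real) qubit: E_m = (2/3)|n_m><n_m| for three
# directions 120 degrees apart. A valid POVM, but its elements are not
# projectors, so it is not an ideal measurement of J.n in any direction.
import numpy as np

angles = [0.0, 2*np.pi/3, 4*np.pi/3]
E = [(2/3) * np.outer(v, v)
     for v in (np.array([np.cos(t), np.sin(t)]) for t in angles)]

print(np.allclose(sum(E), np.eye(2)))           # True: completeness
print([np.allclose(Em @ Em, Em) for Em in E])   # [False, False, False]
```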
 
  • #154
DarMM said:
Yes, in quantum information theory the Shannon entropy is the entropy of the classical model induced by a context. So it naturally depends on the context. I don't see why this is a problem; it's a property of a context. There are many information-theoretic properties that are context dependent in quantum information theory.

Von Neumann entropy is a separate quantity and a property of the state, sometimes called the quantum entropy; it is equal to the minimum Shannon entropy taken over all contexts.

I don't see that what @vanhees71 and I are saying is that different. He's just saying that the von Neumann entropy is the quantum generalization of Shannon entropy. That's correct. Shannon entropy is generalized to the von Neumann entropy, but classical Shannon entropy remains as the entropy of a context.

It's only Peres's use, referring to the entropy of the distribution over densities, that seems nonstandard to me.
Of course the entropy measure depends on the context. That's its strength! It's completely legitimate to define an entropy ##H##, and it is obviously useful in some investigations in quantum information theory, as in Peres's book. To avoid confusion, I'd just not call it Shannon entropy.

Let me try again to make the definition clear (hoping I have understood Peres correctly).

Peres describes the classical gedanken experiment used to introduce mixed states, and thus the general notion of a quantum state in terms of a statistical operator (which imho should be a self-adjoint positive semidefinite operator with trace 1): Alice (A) prepares particles in pure states ##\hat{P}_n=|u_n \rangle \langle u_n|##, each with probability ##p_n##. The ##|u_n \rangle## are normalized but not necessarily orthogonal to each other. The statistical operator associated with this situation is
$$\hat{\rho}=\sum_n p_n \hat{P}_n.$$
Now Peres defines an entropy by
$$H=-\sum_n p_n \ln p_n.$$
This can be analyzed using the general scheme by Shannon. Entropy in Shannon's sense is a measure for the missing information given a probability distribution relative to what's considered complete information.

Obviously Peres takes the ##p_n## as the probability distribution. This distribution describes precisely A's preparation process: it gives the probability that A prepares the state ##\hat{P}_n##. An observer Bob (B) thus uses ##H## as the entropy measure if he knows that A prepares the specific states ##\hat{P}_n##, each with probability ##p_n##. Now A sends him such a state. For B, complete information would be to know which ##\hat{P}_n## it is, but he knows only the probabilities ##p_n##. That's why B uses ##H## as the measure of missing information.

Now the mixed state ##\hat{\rho}## defined above describes something different. It provides the probability distribution for any possible measurement on the system. Complete information in QT means that we measure precisely (in the old von Neumann sense) a complete set of compatible observables ##O_k##, represented by self-adjoint operators ##\hat{O}_k## with orthonormalized eigenvectors ##|\{o_k \} \rangle##. If we are even able to filter the systems according to this measurement, we have prepared the system as completely as possible according to QT, namely in the pure state ##\hat{\rho}(\{o_k \})=|\{o_k \} \rangle \langle \{o_k \}|##.

The probabilities for the outcome of such a complete measurement are
$$p(\{o_k \})=\langle \{o_k \} |\hat{\rho}|\{o_k \} \rangle.$$
Relative to this definition of "complete knowledge", given A's state preparation described by ##\hat{\rho}##, B associates with this situation the entropy
$$S=-\sum_{\{o_k \}} p(\{o_k \}) \ln p(\{o_k \}).$$
Now it is clear that this entropy is independent of which complete set of compatible observables B chooses to define "complete knowledge" in this quantum-theoretical sense, since this entropy is given by
$$S=-\mathrm{Tr} (\hat{\rho} \ln \hat{\rho}).$$
This is the usual definition of the Shannon-Jaynes entropy in quantum theory, and it's identical with von Neumann's definition via this trace. There's no contradiction of any kind between ##H## and ##S##; they are just entropies in Shannon's information-theoretical sense, referring to different information about the same preparation procedure.

One has to keep in mind which "sense of knowledge" the entropy refers to, and then no confusion can occur. As I said before, I'd not call ##H## the Shannon entropy, to avoid confusion, but it's fine as a short name for what Peres clearly defines.
 
  • #155
vanhees71 said:
(I don't think Hermitian is sufficient, though Peres claims this) operators representing spin.
Spin operators are Hermitian and bounded. This already implies that they are self-adjoint.
 
  • #156
A. Neumaier said:
You are using Born's rule, claiming in (2.1.3) of your lecture notes that what is measured are exact eigenvalues - although these are never measured exactly - to derive on p.21 the standard formula for the q-expectation (what you there call the mean value) of known observables (e.g., the mean energy ##\langle H\rangle## in equilibrium statistical mechanics) with unknown (most likely irrational) spectra. But you claim that the resulting q-expectation is not a theoretical construct but is ''in agreement with the fundamental definition of the expectation value of a stochastic variable in dependence of the given probabilities for the outcome of a measurement of this variable.'' This would hold only if your outcomes matched the eigenvalues exactly - ''accurately'' is not enough.
We have discussed this a zillion times. This is the standard treatment in introductory texts, and rightfully so, because you first have to define the idealized case of precise measurements. Then you can generalize it to more realistic descriptions of imprecise measurements.

Peres is a bit contradictory when claiming that everything is defined by defining some POVM. There are no POVMs in the lab, only real-world preparation and measurement devices.

If, as you claim, precise measurements were not what Peres calls a "quantum test", what sense would this projection procedure then make? Still, I don't think it's good language to mix the kets representing pure states with the eigenstates of the observable operators. At the latest when you bring in dynamics using different pictures of time evolution, this leads to confusion. I understood this only quite a while after having learned QT for the first time, by reading the book by Fick, which is among the best books on QT I know. The only point there I think is wrong is to invoke the collapse postulate. Obviously one cannot have a QT textbook that gets everything right :-((.
 
  • #157
A. Neumaier said:
But your usage makes the value of the Shannon entropy dependent on a context (the choice of an orthonormal basis), hence is also not the same as the one vanhees71 would like to use:

Thus we now have three different definitions, and it is far from clear which one is standard.

On the other hand, why should one give two different names to the same concept?
No. As I tried to explain in #154, there's only one definition of Shannon entropy, which is very general but a very clear concept. It's context dependent on purpose, i.e., that's not a bug but a feature of the whole concept.

The only confusion arises due to the unconventional use of the word Shannon entropy in the context of the probabilities described by quantum states.
 
  • #158
vanhees71 said:
No. As I tried to explain in #154, there's only one definition of Shannon entropy, which is very general but a very clear concept. It's context dependent on purpose, i.e., that's not a bug but a feature of the whole concept.

The only confusion arises due to the unconventional use of the word Shannon entropy in the context of the probabilities described by quantum states.
Shannon entropy is a classical concept that applies whenever one has discrete probabilities. Peres, DarMM, and you apply it consistently to three different situations, hence you are all fully entitled to call it Shannon entropy. You cannot hijack the name for your case alone.
 
  • #159
Sigh. Shannon entropy is not restricted to classical physics. It's applicable to any situation described with probabilities. I also do not see what's wrong with the usual extension of the Shannon entropy to continuous situations. After all, entropy in the context of statistical physics has been defined using continuous phase-space variables.
 
  • #160
vanhees71 said:
Sigh. Shannon entropy is not restricted to classical physics. It's applicable to any situation described with probabilities.
This is just what I had asserted. Instead of sighing, it might be better to pay attention to what was actually said: you, DarMM, and Peres consider three different kinds of quantum situations described with probabilities and hence get three different Shannon entropies. They all fully deserve this name.
vanhees71 said:
I also do not see what's wrong with the usual extension of the Shannon entropy to continuous situations.
The Shannon entropy of a source is defined as the minimal expected number of questions that need to be asked to pin down the classical state of the source (i.e., the exact knowledge of what was transmitted), given the probability distribution for the possibilities. It applies by its nature only to discrete probabilities, since for continuous events no finite number of questions pins down the state exactly.
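As a standard illustration of this counting (my own addition, not from the posts above): a uniform distribution over ##2^k## symbols is pinned down by ##k## yes/no questions, matching
$$H=-\sum_{i=1}^{2^k} 2^{-k} \ln 2^{-k} = k \ln 2,$$
whereas no finite number of yes/no questions determines the exact value of a continuous variable.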
vanhees71 said:
After all, entropy in the context of statistical physics has been defined using continuous phase-space variables.
In the statistical physics of equilibrium, the Hamiltonian must have a discrete spectrum; otherwise the canonical density operator is not defined: indeed, if the spectrum is not discrete, ##e^{-\beta H}## is not trace class, and the partition function diverges.
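A standard pair of examples (my own addition): for the harmonic oscillator, with discrete spectrum ##E_n=\omega(n+1/2)##, the trace
$$Z=\mathrm{Tr}\, e^{-\beta \hat{H}}=\sum_{n=0}^{\infty} e^{-\beta \omega (n+1/2)}=\frac{e^{-\beta\omega/2}}{1-e^{-\beta \omega}}$$
converges, while for a free particle in infinite volume, whose spectrum is purely continuous, ##\mathrm{Tr}\, e^{-\beta \hat{H}}## diverges.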

On the other hand, Boltzmann's H is, by the above argument, not a Shannon entropy.
 
  • #161
PeterDonis said:
So basically, the criterion being given is that for something to be "real" it must be capable of being acted on by other "real" things, whereas the wave function is not acted on by anything. But this doesn't seem right, because the wave function is determined by Schrodinger's Equation, which includes the potential energy, and the potential energy is a function of the particle configuration. So I don't think it's correct to say that the wave function is not acted on by anything.
This is a misunderstanding, but a very intriguing one, namely a category error. The wave function is a solution of the Schrödinger equation, which is in particular determined by positions (among other things that are not relevant for the rest of the argument).

Being a solution to an equation is definitely not the same kind of logical or mathematical relationship as, e.g., the relationship between two dynamical objects such as two masses mutually acting upon each other; the former is a relationship between input and output, while the latter is a relationship between two inputs that together determine an output.
 
  • #162
Auto-Didact said:
This is a misunderstanding, but a very intriguing one, namely a category error.

In the context of, say, the Copenhagen interpretation of QM, it is, yes. But not in the context of Bohmian mechanics, which was the interpretation under discussion in the post of mine you quoted and the subthread it is part of. In Bohmian mechanics the wave function is a real thing; the Schrodinger Equation is simply an equation that governs the dynamics of this real thing.
 
  • #163
PeterDonis said:
In Bohmian mechanics the wave function is a real thing

Hmm, I'm not sure that's necessarily the case. BM can frame the wavefunction as nomological rather than ontological, i.e., instead of being a thing that exists, it is a representation of the behaviour of things that exist.

From "Quantum Physics Without Quantum Philosophy" by Goldstein et al

"It should be clear by now what, from a universal viewpoint, the answer to these objections must be: the wave function of the universe should be regarded as a representation, not of substantial physical reality, but of physical law."

"As such, the wave function plays a role analogous to that of the Hamiltonian function H = H (Q, P ) ≡ H (ξ ) in classical mechanics [...] And few would be tempted to regard the Hamiltonian function H as a real physical field, or expect any back-action of particle configurations on this Hamiltonian function."
 
  • #164
PeterDonis said:
In the context of, say, the Copenhagen interpretation of QM, it is, yes. But not in the context of Bohmian mechanics, which was the interpretation under discussion in the post of mine you quoted and the subthread it is part of. In Bohmian mechanics the wave function is a real thing; the Schrodinger Equation is simply an equation that governs the dynamics of this real thing.
Actually, that isn't exactly true: in BM the wavefunction isn't a field in physical space(time) but a vector field in configuration space (NB: the fact that the wavefunction lives in configuration space while acting upon particles in space(time) is also why BM is an explicitly nonlocal theory).

This is quite similar to the Hamiltonian vector field, which exists in phase space; the difference is that the Hamiltonian vector field is static, while the wavefunction, as a solution of the SE, is dynamic. I suspect, however, that this difference might be a red herring, because we actually know of a static wavefunction, namely the solution of the Wheeler-DeWitt equation.
 
  • #165
Auto-Didact said:
in BM the wavefunction isn't a field in physical space(time), but a vector field in configuration space

That's true, but it doesn't change what I said.
 
  • #166
Morbert said:
BM can frame the wavefunction as nomological rather than ontological.

The reference you give, as far as I can tell, isn't talking about BM.
 
  • #167
PeterDonis said:
The reference you give, as far as I can tell, isn't talking about BM.

Bohmian Mechanics is the main subject of the book. The quotes are from chapter 11.5: "A Universal Bohmian Theory". Specifically, the wavefunction is described as nomological in response to the objection that the wavefunction in BM doesn't experience any back-action from the existing configuration.
 
  • #168
Morbert said:
the wavefunction is described as nomological in response to the objection that the wavefunction in BM doesn't experience any back-action from the existing configuration

I don't have the book, but the position you describe seems similar to that described in the paper @DarMM linked to in post #50. We discussed that earlier in the thread.
 
  • #169
PeterDonis said:
I don't have the book, but the position you describe seems similar to that described in the paper @DarMM linked to in post #50. We discussed that earlier in the thread.
Ah ok, I missed the earlier context of the discussion.
 
  • #170
A. Neumaier said:
This is just what I had asserted. Instead of sighing, it might be better to pay attention to what was actually said: you, DarMM, and Peres consider three different kinds of quantum situations described with probabilities and hence get three different Shannon entropies. They all fully deserve this name.

The Shannon entropy of a source is defined as the minimal expected number of questions that need to be asked to pin down the classical state of the source (i.e., the exact knowledge of what was transmitted), given the probability distribution for the possibilities. It applies by its nature only to discrete probabilities, since for continuous events no finite number of questions pins down the state exactly.

In the statistical physics of equilibrium, the Hamiltonian must have a discrete spectrum; otherwise the canonical density operator is not defined: indeed, if the spectrum is not discrete, ##e^{-\beta H}## is not trace class, and the partition function diverges.

On the other hand, Boltzmann's H is, by the above argument, not a Shannon entropy.
So you are saying that the classical examples for the application of equilibrium statistics are flawed, i.e., no ideal gases, no Planck black-body radiation, no specific heat of solids, and all that? How can it be, then, that it works so well in physics? That one has to take the "thermodynamic limit" very carefully is clear.
 
  • #171
vanhees71 said:
So you are saying that the classical examples for the application of equilibrium statistics are flawed, i.e., no ideal gases, no Planck black-body radiation, no specific heat of solids, and all that? How can it be, then, that it works so well in physics? That one has to take the "thermodynamic limit" very carefully is clear.
The derivation of thermodynamics from statistical mechanics is sound. I give such a derivation in Part II of my online book in terms of the grand canonical density operator, independent of any interpretation in terms of probabilities and of any thermodynamical limit.

I am only claiming that the thermodynamic concept of entropy in general (e.g., in Boltzmann's H-theorem) has nothing to do with Shannon entropy. It is not amenable to an information theoretic analysis. The latter is limited to discrete probabilities.
 
  • #172
A. Neumaier said:
The latter is limited to discrete probabilities
Do you mean a discrete sample space?
 
  • #173
A. Neumaier said:
The derivation of thermodynamics from statistical mechanics is sound. I give such a derivation in Part II of my online book in terms of the grand canonical density operator, independent of any interpretation in terms of probabilities and of any thermodynamical limit.

I am only claiming that the thermodynamic concept of entropy in general (e.g., in Boltzmann's H-theorem) has nothing to do with Shannon entropy. It is not amenable to an information theoretic analysis. The latter is limited to discrete probabilities.
Well, there are two camps in the physics community: one camp likes the information-theoretical approach to statistical physics, the other hates it. I belong to the first camp, because for me the information-theoretical approach provides the best understanding of what entropy is from a microscopic point of view.

What I mean by the "thermodynamic limit" is the limit of taking the volume to infinity while keeping densities constant. This is the non-trivial limit you seem to refer to when saying you need discrete probability distributions. Indeed, at finite volume with appropriate spatial boundary conditions (periodic ones, in case you want proper momentum representations), momentum becomes discrete, and you can do all calculations in a (pretty) well-defined way.
 
  • #174
vanhees71 said:
Well, there are two camps in the physics community: one camp likes the information-theoretical approach to statistical physics, the other hates it.
My arguments are of a logical nature. They do not depend on emotional feelings associated with an approach. The information theoretical approach is limited by the nature of the questions it asks.
vanhees71 said:
What I mean by the "thermodynamic limit" is the limit of taking the volume to infinity while keeping densities constant.
In this thermodynamical limit, all statistical uncertainties reduce to zero, and no trace of the statistics remains.

In particular, it is not the limit in which a discrete probability distribution becomes a continuous one. Thus it does not justify applying information-theoretical reasoning to continuous probability distributions.

Neither is Boltzmann's H-theorem phrased in terms of a thermodynamic limit. No information theory is needed to motivate and understand his results. Indeed, they were obtained many dozens of years before Jaynes introduced the information-theoretical interpretation.
 
  • #175
DarMM said:
Do you mean a discrete sample space?
I mean a discrete measure on the sigma algebra with respect to which the random variables associated with the probabilities are defined. This is needed to make sense of the Shannon entropy as a measure of lack of information. For example, Wikipedia says,
Wikipedia said:
The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable [...] is usually referred to as the continuous entropy, or differential entropy. A precursor of the continuous entropy h[f] is the expression for the functional Η in the H-theorem of Boltzmann. [...] Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative. [...] The differential entropy is not a limit of the Shannon entropy for n → ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset. [...] It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information.
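A one-line instance of that negativity (my own addition, a standard example): the uniform density ##f(x)=2## on ##[0,1/2]## has differential entropy
$$h[f]=-\int_0^{1/2} 2 \ln 2 \,\mathrm{d}x = -\ln 2 < 0.$$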
vanhees71 said:
for me the information-theoretical approach provides the best understanding of what entropy is from a microscopic point of view.
What should a negative amount of missing information mean? What improved understanding does it provide?
 

1. What is a measurement setup in the context of spin measurement outcomes?

A measurement setup refers to the experimental apparatus used to measure the spin of a particle, such as a Stern-Gerlach apparatus or a magnetic resonance imaging (MRI) machine.

2. Can the measurement setup influence the outcome of a spin measurement?

Yes, the measurement setup can influence the outcome of a spin measurement. Factors such as the strength and orientation of magnetic fields, as well as the sensitivity and precision of the measurement instrument, can affect the results.

3. Does the measurement setup determine the reality of spin measurement outcomes?

No, the measurement setup does not determine the reality of spin measurement outcomes. The outcome of a spin measurement is determined by the intrinsic properties of the particle being measured, and the measurement setup simply allows us to observe and quantify these properties.

4. How do scientists account for the influence of the measurement setup on spin measurement outcomes?

Scientists take great care in designing and calibrating their measurement setups to minimize any potential influence on spin measurement outcomes. They also conduct multiple measurements and analyze the data statistically to account for any experimental errors.

5. Are there any limitations to the accuracy of spin measurements due to the measurement setup?

Yes, there are limitations to the accuracy of spin measurements due to the measurement setup. These limitations can be minimized through advancements in technology and careful experimental design, but they cannot be completely eliminated due to the inherent uncertainty and complexity of quantum systems.
