How Does Quantum Mechanics Relate to Quantum Field Theory in Particle Physics?

  • Thread starter: fxdung
  • Tags: QFT, QM, relation
  • #151
vanhees71 said:
Of course, we all know, how it is defined,
$$\langle E \rangle=\langle \psi|\hat{H}|\psi \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} \, \psi^*(\vec{x}) \left [-\frac{\Delta}{2m}+V(\vec{x}) \right ] \psi(\vec{x}).$$
This is unambiguously defined in the mathematical foundations and has nothing to do with any "interpretation weirdness".
The question was not how it is defined (which is part of the QM calculus) but why Born's rule which just says that ##|\psi(x)|^2## is the probability density of ##x## implies that the right hand side is the expected measurement value of ##H##. It is used everywhere but is derived nowhere, it seems to me.
 
  • #152
That's the strength of Dirac's formulation compared to the wave-mechanics approach. A pure state is represented by a normalized state vector ##|\psi \rangle## (more precisely the ray, but that's irrelevant for this debate). Then ##\hat{H}## has a complete set of (generalized) eigenvectors ##|E \rangle## (let's for simplicity also assume that the Hamiltonian is non-degenerate). Then the probability that a system prepared in this state has energy ##E## is, according to the Born rule, given by
$$P(E)=|\langle E|\psi \rangle|^2,$$
and thus
$$\langle E \rangle = \sum_E P(E) E=\sum_E \langle \psi|E \rangle \langle E|\hat{H} \psi \rangle = \langle \psi|\hat{H} \psi \rangle.$$
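Just to make the identity ##\langle E \rangle = \langle \psi|\hat{H} \psi \rangle## concrete, here is a minimal numerical sanity check in a finite-dimensional toy model (a random Hermitian matrix and a random state, purely illustrative, with ##\hbar=1##):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Hamiltonian": a random 4x4 Hermitian matrix (arbitrary illustration)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Normalized state vector |psi>
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Spectral decomposition of H: columns of V are the eigenvectors |E>
E, V = np.linalg.eigh(H)

# Born rule: P(E) = |<E|psi>|^2, then <E> = sum_E P(E) E
P = np.abs(V.conj().T @ psi) ** 2
mean_E_born = np.sum(P * E)

# Direct expectation value <psi|H|psi>
mean_E_direct = np.real(psi.conj() @ H @ psi)

print(np.isclose(mean_E_born, mean_E_direct))  # True
```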
The latter expression can now be written in any other representation you like. In the position representation you have, e.g.,
$$\langle E \rangle = \int \mathrm{d}^3 \vec{x}_1 \int \mathrm{d}^3 \vec{x}_2 \langle \psi |\vec{x}_1 \rangle \langle \vec{x}_1|\hat{H}|\vec{x}_2 \rangle \langle \vec{x}_2 |\psi \rangle=\int \mathrm{d}^3 \vec{x}_1 \int \mathrm{d}^3 \vec{x}_2 \psi^*(\vec{x}_1) H(\vec{x}_1,\vec{x}_2) \psi(\vec{x}_2).$$
Now you only have to calculate the matrix element. For the potential it's very simple:
$$V(\vec{x}_1,\vec{x}_2)=\langle \vec{x}_1|V(\hat{\vec{x}})|\vec{x}_2 \rangle=V(\vec{x}_2) \delta^{(3)}(\vec{x}_1-\vec{x}_2).$$
For the kinetic part, it's a bit more complicated, but also derivable from the Heisenberg algebra of position and momentum operators.

The first step is to prove
$$\langle \vec{x}|\vec{p} \rangle=\frac{1}{(2 \pi)^{3/2}} \exp(\mathrm{i} \vec{x} \cdot \vec{p}).$$
For simplicity I do this only for the 1-component of position and momentum. That the simultaneous generalized eigenvector of all three momentum components factorizes is clear.

Since ##\hat{p}## is the generator of spatial translations, it's intuitive to look at the operator
$$\hat{X}(\xi)=\exp(\mathrm{i} \xi \hat{p}) \hat{x} \exp(-\mathrm{i} \xi \hat{p}).$$
Taking the derivative with respect to ##\xi##, it follows that
$$\frac{\mathrm{d}}{\mathrm{d} \xi} \hat{X}(\xi)=-\mathrm{i} \exp(\mathrm{i} \xi \hat{p}) [\hat{x},\hat{p}] \exp(-\mathrm{i} \xi \hat{p}).$$
From the Heisenberg commutation relations, and using ##\hat{X}(0)=\hat{x}##, this gives
$$\frac{\mathrm{d}}{\mathrm{d} \xi} \hat{X}(\xi)=\hat{1} \; \Rightarrow \; \hat{X}(\xi)=\hat{x}+\xi \hat{1}.$$
So we have
$$\hat{x} \exp(-\mathrm{i} \xi \hat{p}) |x=0 \rangle=\exp(-\mathrm{i} \xi \hat{p}) \hat{X}(\xi) |x=0 \rangle=\xi \exp(-\mathrm{i} \xi \hat{p}) |x=0 \rangle.$$
Then you have
$$\langle x|p \rangle=\langle \exp(-\mathrm{i} x \hat{p}) x=0|p \rangle=\langle x=0|p \rangle \exp(+\mathrm{i} p x)=N_p \exp(\mathrm{i} p x).$$
The constant ##N_p## is determined by the normalization of the momentum eigenstate as
$$\langle p|p' \rangle=\delta(p-p')=\int \mathrm{d} x \langle p|x \rangle \langle x|p' \rangle=\int \mathrm{d} x N_{p}^* N_p \exp[\mathrm{i}x(p'-p)]=2 \pi |N_{p}|^2 \delta(p-p') \; \Rightarrow \; N_{p}=\frac{1}{\sqrt{2 \pi}}.$$
Of course, the choice of phase is arbitrary.

Now we can also evaluate the expectation value of the kinetic energy easily (the third equality uses two integrations by parts to shift the Laplacian onto ##\psi##):
$$\left \langle \frac{\vec{p}^2}{2m} \right \rangle=\int \mathrm{d}^3 \vec{x} \, \mathrm{d}^3 \vec{p} \, \frac{p^2}{2m} \langle \psi|\vec{p} \rangle \langle \vec{p}| \vec{x} \rangle \langle \vec{x} |\psi \rangle=\int \mathrm{d}^3 \vec{x} \, \mathrm{d}^3 \vec{p} \, \frac{p^2}{2m} \frac{1}{(2 \pi)^{3/2}} \exp(-\mathrm{i} \vec{p} \cdot \vec{x}) \langle \psi|\vec{p} \rangle \psi(\vec{x})= \int \mathrm{d}^3 \vec{x} \, \mathrm{d}^3 \vec{p} \left [-\frac{\Delta}{2m} \frac{1}{(2 \pi)^{3/2}} \exp(-\mathrm{i} \vec{p} \cdot \vec{x}) \right ] \langle \psi|\vec{p} \rangle \psi(\vec{x}) = \int \mathrm{d}^3 \vec{x} \int \mathrm{d}^3 \vec{p} \, \langle \psi|\vec{p} \rangle \langle \vec{p}|\vec{x} \rangle \left (-\frac{\Delta}{2m} \right) \psi(\vec{x}) = \int \mathrm{d}^3 \vec{x} \, \psi^*(\vec{x}) \left (-\frac{\Delta}{2m} \right) \psi(\vec{x}).$$
So it's not just written down but derived from the fundamental postulates + the specific realization of a quantum theory based on the Heisenberg algebra. To derive the latter from the Galilei group alone is a bit more lengthy. See Ballentine, Quantum Mechanics, for that issue (or my QM 2 lecture notes, which, however, are in German only: http://fias.uni-frankfurt.de/~hees/publ/hqm.pdf ).
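As a quick cross-check of the kinetic-energy identity above, here is a minimal 1D numerical sketch (Gaussian wave packet, FFT for the momentum representation, finite differences for the Laplacian; ##\hbar=m=1##, and the grid and packet parameters are arbitrary illustrative choices):

```python
import numpy as np

# 1D grid, hbar = m = 1; grid size and packet parameters are arbitrary
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Normalized Gaussian wave packet with mean momentum p0
p0, sigma = 1.3, 1.0
psi = np.exp(-x**2 / (2 * sigma**2) + 1j * p0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Momentum-space route: <p^2/2m> = int dp |<p|psi>|^2 p^2/(2m)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dp = 2 * np.pi / (N * dx)
psi_p = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)   # discrete stand-in for <p|psi> (up to a phase, which drops out)
T_momentum = np.sum(np.abs(psi_p)**2 * p**2 / 2) * dp

# Position-space route: <psi| -(1/2) d^2/dx^2 |psi>, Laplacian by finite differences
d2psi = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
T_position = np.real(np.sum(psi.conj() * (-d2psi / 2)) * dx)

print(T_momentum, T_position)   # agree up to discretization error
```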
 
  • #153
vanhees71 said:
That's the strength of Dirac's formulation compared to the wave-mechanics approach.[...] So it's not just written down but derived from the fundamental postulates
From Dirac's postulates (and only if ##H## has no continuous spectrum) but not from Born's. Are Dirac's postulates somewhere available online?
 
  • #154
But there is only one set of postulates in the standard (aka Dirac-von Neumann) formulation. The statistical postulate is:
1. The set of experimentally obtained values of an observable ##A## is the set of spectral values of the self-adjoint operator ##\hat{A}##.
2. If the state of the system for which one measures ##A## is ##\{p_k, |\psi_k\rangle\}##, then the probability to get ##a_n## from the discrete spectrum of ##\hat{A}## is ##P(a_n) = \sum_k p_k \langle\psi_k|\hat{P}_n|\psi_k\rangle##, while the probability density at the point ##\alpha## of the parametrization space of the continuous spectrum of ##\hat{A}## is ##P(\alpha) = \sum_k p_k \langle\psi_k|\hat{P}_\alpha|\psi_k\rangle##.

The projectors are defined in terms of the Dirac bra/ket spectral decomposition of Â.
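As a minimal finite-dimensional sketch of postulate 2 in the discrete case (the observable, weights, and states below are arbitrary illustrative choices; the density-matrix form ##P(a_n)=\mathrm{Tr}(\hat{\rho}\hat{P}_n)## is the standard equivalent rewriting):

```python
import numpy as np

# Observable: sigma_z, with spectral projectors onto its eigenvectors
sigma_z = np.diag([1.0, -1.0])
eigvals, eigvecs = np.linalg.eigh(sigma_z)
projectors = [np.outer(v, v.conj()) for v in eigvecs.T]

# Mixed state {p_k, |psi_k>}: arbitrary weights and normalized state vectors
p_k = [0.3, 0.7]
psi_k = [np.array([1.0, 0.0]),
         np.array([1.0, 1.0]) / np.sqrt(2)]

# Postulate 2: P(a_n) = sum_k p_k <psi_k| P_n |psi_k>
for a_n, P_n in zip(eigvals, projectors):
    prob = sum(p * np.real(psi.conj() @ P_n @ psi) for p, psi in zip(p_k, psi_k))
    print(f"P({a_n:+.0f}) = {prob:.3f}")

# Equivalent density-matrix form: P(a_n) = Tr(rho P_n)
rho = sum(p * np.outer(psi, psi.conj()) for p, psi in zip(p_k, psi_k))
print([round(np.real(np.trace(rho @ P_n)), 3) for P_n in projectors])
```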
 
  • #155
dextercioby said:
But there is only one set of postulates in the standard (aka Dirac-von Neumann) formulation. The statistical postulate is:
1. The set of experimentally obtained values of an observable ##A## is the set of spectral values of the self-adjoint operator ##\hat{A}##.
2. If the state of the system for which one measures ##A## is ##\{p_k, |\psi_k\rangle\}##, then the probability to get ##a_n## from the discrete spectrum of ##\hat{A}## is ##P(a_n) = \sum_k p_k \langle\psi_k|\hat{P}_n|\psi_k\rangle##, while the probability density at the point ##\alpha## of the parametrization space of the continuous spectrum of ##\hat{A}## is ##P(\alpha) = \sum_k p_k \langle\psi_k|\hat{P}_\alpha|\psi_k\rangle##.

The projectors are defined in terms of the Dirac bra/ket spectral decomposition of Â.
This version doesn't cover the argument used by vanhees71 in post #152 in the case that ##H## has a continuous spectrum, where the sum must be replaced by an integral.

Which version is in Dirac's book? Or do different editions have different versions? Is one of them applicable to ##H=p^2/2m##?
 
  • #156
Of course it does. The continuous spectrum is addressed by an integration over the parametrization space. The integral is Riemannian, and the parametrization space is a subset of ##\mathbb{R}##. The spectral decomposition is ##\sum_n \hat{P}_n + \int \mathrm{d}\alpha \, \hat{P}_\alpha = \hat{1}##. This expression makes sense in the rigged Hilbert space formulation of QM, advocated by Arno Böhm and his coworkers.
 
  • #157
dextercioby said:
Of course it does. The continuous spectrum is addressed by an integration over the parametrization space.
The question is whether the continuous case (with a Stieltjes integral in place of the sum and the interpretation of matrix elements as probability densities) is in the postulates as formulated by Dirac, or if it is just proceeding by analogy - which would mean that the foundations were not properly formulated.

The rigged Hilbert space is much later than Dirac I think - Gelfand 1964?
 
  • #158
A. Neumaier said:
The question is whether the continuous case (with a Stieltjes integral in place of the sum and the interpretation of matrix elements as probability densities) is in the postulates as formulated by Dirac, or if it is just proceeding by analogy - which would mean that the foundations were not properly formulated.

The rigged Hilbert space is much later than Dirac I think - Gelfand 1964?

No, the foundations were properly formulated by von Neumann in 1931, indeed using Stieltjes integrals to define the spectral measures. Dirac's book of 1958 has no precise statement of a set of axioms, yet it has been customary to denote the standard axioms by the name of Dirac and von Neumann (especially for the state reduction/collapse axiom).

The rigged Hilbert spaces were invented by Gel'fand and Kostyuchenko in 1955 and described at length in the 1961 book (the 4th volume of the famous Generalized Functions series), which was translated into English in 1964. I do not know whether Arno Böhm knew Russian; the book may first have been translated into German, or someone may simply have helped with the translation from Russian. The first application of RHS to QM was made by Arno Böhm in 1964 in a preprint (unfortunately poorly scanned) at the International Centre for Theoretical Physics in Trieste.
 
  • #159
A. Neumaier said:
The question is whether the continuous case (with a Stieltjes integral in place of the sum and the interpretation of matrix elements as probability densities) is in the postulates as formulated by Dirac, or if it is just proceeding by analogy - which would mean that the foundations were not properly formulated.

The rigged Hilbert space is much later than Dirac I think - Gelfand 1964?

There are two types of foundations - physical and mathematical. Throughout, I have meant physical while you have often meant mathematical.

The physical foundations were properly formulated by Bohr, Dirac, Heisenberg, and von Neumann. Each took a slightly different view, but the key point is that quantum mechanics is a practical operational theory which only makes probabilistic predictions. The wave function, collapse, etc. are not real. And most importantly, quantum mechanics has a measurement problem.

The mathematical foundations were not complete at the time of von Neumann. POVMs and collapse for continuous variables came later. However, this mathematical tidying-up changed no physical concept.
 
  • #160
A. Neumaier said:
The question was not how it is defined (which is part of the QM calculus) but why Born's rule which just says that ##|\psi(x)|^2## is the probability density of ##x## implies that the right hand side is the expected measurement value of ##H##. It is used everywhere but is derived nowhere, it seems to me.

No, of course you cannot derive it from the literal Born rule. When we say "Born rule" nowadays, we mean the generalization due to Dirac, von Neumann, and later work, e.g., what vanhees71 did in post #152.
 
  • #161
A. Neumaier said:
From Dirac's postulates (and only if ##H## has no continuous spectrum) but not from Born's. Are Dirac's postulates somewhere available online?
I don't know what you mean by Dirac's vs. Born's postulates. I think the best source for Dirac's point of view is still his famous textbook. What's known as Born's rule is that the modulus squared of the wave function, no matter with respect to which basis, gives the probabilities for the discrete values and the probability distributions for the continuous values of the spectrum of the self-adjoint operator. Dirac's handling of distributions (in the sense of generalized functions) was à la physics, i.e., not rigorous. Making it rigorous led the mathematicians to the development of modern functional analysis. The first mathematically rigorous formulation in the form of Hilbert-space theory goes back to John von Neumann, but his physics is a catastrophe, leading to a lot of esoteric debates concerning interpretation. His interpretation is Copenhagen + the necessity of a conscious being to take note of the result of a measurement. So it's solipsism in some sense, and it led to the famous question by Bell of when the first "collapse" might have happened after the big bang, and whether an amoeba is enough to observe something or whether you need some more "conscious" being like a mammal or a human ;-)).
 
  • #162
vanhees71 said:
The first mathematically rigorous formulation in the form of Hilbert-space theory goes back to John von Neumann, but his physics is a catastrophe, leading to a lot of esoteric debates concerning interpretation.
Now I am confused. Do you consider his interpretation in terms of consciousness to be a part of physics? That's confusing because at other places you seem to claim the opposite, that such interpretations are not physics.

Or maybe, which I would more naturally expect from you, you would like to divide his work into three aspects: mathematics, physics, and interpretation? But in that case it would not be fair to call his physics a catastrophe. His recognition that measurement involves entanglement with the wave functions of macroscopic apparatuses is an amazing physical insight, widely adopted in the modern theory of quantum measurements, irrespective of interpretations.
 
  • #163
Of course, von Neumann's interpretation is not physics but esoterics. I'm totally baffled that somebody of his caliber could come to such an idea. I think his merits concerning QT are completely mathematical, namely to have put it on solid, mathematically rigorous ground in terms of Hilbert-space theory (mostly in the formulation as "wave mechanics").
 
  • #164
vanhees71 said:
Of course, von Neumann's interpretation is not physics but esoterics. I'm totally baffled that somebody of his caliber could come to such an idea.
I agree on this.

vanhees71 said:
I think his merits concerning QT are completely mathematical,
But disagree on that. I think he had physical merits too.
 
  • #165
Demystifier said:
I agree on this.

But the greatness of von Neumann is that he saw clearly, like Bohr and Dirac, that Copenhagen has a measurement problem. The great merit of these physicists is that they were very concerned about physics, unlike Peres's book, which (marvellous as it is) is completely misleading in not stating the measurement problem clearly, and even hints that it does not exist in the Ensemble interpretation.

Also, I don't think von Neumann's idea of consciousness causing collapse is that different from Bohr's or even Landau and Lifshitz's classical/quantum cut, which is a subjective cut. It's the same as Dirac agreeing that there is an observer problem - somehow there has to be an observer/consciousness/classical-quantum cut, which are more or less the same thing.
 
  • #166
That's the great miracle. After all this time people think that there is a measurement problem, but where is it if one accepts the minimal interpretation?

Where is the necessity of a classical/quantum cut or a collapse? I just need real-world lab equipment and experimentalists able to handle it to do measurements on whatever system they can prepare in whatever way, make a model within QT, and compare my prediction to the outcome of the measurements. Both my prediction and the measurements are probabilistic and statistical, respectively. The more than 90 years of application of QT to real-world experimental setups and real-world observations are a great success story. So where is the real physics problem? There may be a problem in some metaphysical sense, depending on the belief or world view of the one or the other physicist, but no problem concerning the natural-science side of affairs.
 
  • #167
vanhees71 said:
That's the great miracle. After all this time people think that there is a measurement problem, but where is it if one accepts the minimal interpretation?

Where is the necessity of a classical/quantum cut or a collapse? I just need real-world lab equipment and experimentalists able to handle it to do measurements on whatever system they can prepare in whatever way, make a model within QT, and compare my prediction to the outcome of the measurements. Both my prediction and the measurements are probabilistic and statistical, respectively. The more than 90 years of application of QT to real-world experimental setups and real-world observations are a great success story. So where is the real physics problem? There may be a problem in some metaphysical sense, depending on the belief or world view of the one or the other physicist, but no problem concerning the natural-science side of affairs.

A simple way to see it is that even in the minimal interpretation, one has deterministic unitary evolution and probabilistic evolution due to the Born rule. If one extends deterministic evolution to the whole universe, then there is no room for probability. So the wave function cannot extend to the whole universe. Deciding where it stops, i.e., where the boundary between deterministic evolution and stochastic evolution lies, is the classical/quantum cut.
 
  • #168
I've never claimed that QT is applicable to a single "event" like the "entire universe" ;-)).
 
  • #169
Minimal ensemble interpretation is not a solution of the measurement problem. It is a clever way of avoiding talk about the measurement problem.
 
  • #170
vanhees71 said:
I've never claimed that QT is applicable to a single "event" like the "entire universe" ;-)).
How about single electron?
 
  • #171
QT makes probabilistic predictions about the behavior of a single electron. You can take a single electron and prepare it very often in the same state and statistically analyse the result to test the probabilistic predictions. A single measurement on a single electron doesn't tell much concerning the validity of the probabilistic predictions.
 
  • #172
vanhees71 said:
I've never claimed that QT is applicable to a single "event" like the "entire universe" ;-)).

Yes, so one needs an ensemble of subsystems of the universe. The choice of subsystem is the classical/quantum cut.
 
  • #173
This is a bit too short an answer to be convincing. Why is choosing a subsystem of the universe the classical/quantum cut? Matter as we know it cannot be described completely by classical physics at all. So how can just taking a lump of matter as the choice of a subsystem define a classical/quantum cut?
 
  • #174
vanhees71 said:
This is a bit too short an answer to be convincing. Why is choosing a subsystem of the universe the classical/quantum cut? Matter as we know it cannot be described completely by classical physics at all. So how can just taking a lump of matter as the choice of a subsystem define a classical/quantum cut?

Well, if you agree that quantum mechanics cannot describe the whole universe, but it can describe subsystems of it, then it seems that at some point quantum mechanics stops working.
 
  • #175
vanhees71 said:
That's the great miracle. After all this time people think that there is a measurement problem, but where is it if one accepts the minimal interpretation?

The Born interpretation itself seems to me to require a choice of basis before it can be applied. The rule gives the probability of obtaining various values as the results of measurements. I don't see how you can make sense of the Born rule without talking about measurements. How can you possibly compare QM to experiment unless you have a rule saying: if you do such and such, you will get such and such value? (or: if you do such and such many times, the values will be distributed according to such and such probability)
 
  • #176
vanhees71 said:
This is a bit too short an answer to be convincing. Why is choosing a subsystem of the universe the classical/quantum cut? Matter as we know it cannot be described completely by classical physics at all. So how can just taking a lump of matter as the choice of a subsystem define a classical/quantum cut?

Well, I'm not sure that the cut needs to be classical/quantum, but in order to compare theory with experiment, there needs to be such a thing as "the outcome of an experiment". If the theory predicts that you have a probability ##P## of getting outcome ##O##, then it has to be possible to get a definite outcome in order to compile statistics and compare with the theoretical prediction. But for the subsystem described by quantum mechanics, there are no definite outcomes. The system is described by superpositions such as ##\alpha |\psi_1\rangle + \beta |\psi_2\rangle##. So it seems to me that we distinguish between the system under study, which we treat as evolving continuously according to Schrödinger's equation, and the apparatus/detector/observer, which we treat as having definite (although nondeterministic) outcomes. That's the split that is sometimes referred to as the classical/quantum split, and it seems that something like it is necessary in interpreting quantum mechanics as a probabilistic theory.
 
  • #177
atyy said:
Well, if you agree that quantum mechanics cannot describe the whole universe, but it can describe subsystems of it, then it seems that at some point quantum mechanics stops working.
Sure. But what has this to do with the quantum/classical cut? Classical physics is also not working!
 
  • #178
stevendaryl said:
The Born interpretation itself seems to me to require a choice of basis before it can be applied. The rule gives the probability of obtaining various values as the results of measurements. I don't see how you can make sense of the Born rule without talking about measurements. How can you possibly compare QM to experiment unless you have a rule saying: if you do such and such, you will get such and such value? (or: if you do such and such many times, the values will be distributed according to such and such probability)
Sure, it requires a choice of basis, but that's the choice of what you measure, because you have to choose the eigenbasis of the operator representing the observable you choose to measure. There's nothing very surprising here.

QT subscribes only to the second formulation in parentheses: "if you do such and such many times, the values will be distributed according to such and such probability." That's precisely how QT in the minimal formulation works: "doing such and such" is called "preparation" in the formalism and defines what a state (pure or mixed) is, and "the values" refer to an observable you choose to measure. The prediction of QT is that in the given state the probability (distribution) to find a value of this measured observable is given by Born's rule.
 
  • #179
vanhees71 said:
Sure. But what has this to do with the quantum/classical cut? Classical physics is also not working!

Yes, classical/quantum cut does not literally mean classical. It just means where we take QM to stop working, and where we get definite outcomes.
 
  • #180
stevendaryl said:
Well, I'm not sure that the cut needs to be classical/quantum, but in order to compare theory with experiment, there needs to be such a thing as "the outcome of an experiment". If the theory predicts that you have a probability ##P## of getting outcome ##O##, then it has to be possible to get a definite outcome in order to compile statistics and compare with the theoretical prediction. But for the subsystem described by quantum mechanics, there are no definite outcomes. The system is described by superpositions such as ##\alpha |\psi_1\rangle + \beta |\psi_2\rangle##. So it seems to me that we distinguish between the system under study, which we treat as evolving continuously according to Schrödinger's equation, and the apparatus/detector/observer, which we treat as having definite (although nondeterministic) outcomes. That's the split that is sometimes referred to as the classical/quantum split, and it seems that something like it is necessary in interpreting quantum mechanics as a probabilistic theory.
Sure, but where is there a problem? The very success of very accurate measurements in accordance with the predictions of QT shows that there is no problem. To understand how a measurement apparatus works, ask the experimentalists/engineers who invented it which model of the apparatus they had in mind when constructing it. It's almost always classical, and that the classical approximation works is shown by the very success of the apparatus in measuring what it is supposed to measure.

Another question is, how to understand the classical behavior of macroscopic objects from QT, including that of measurement devices (which are, of course, themselves just macroscopic objects, obeying the same quantum laws of nature as any other). I think that this is quite well understood in terms of quantum statistics and appropriate effective coarse-grained descriptions of macroscopic observables derived from QT.
 
  • #181
atyy said:
Yes, classical/quantum cut does not literally mean classical. It just means where we take QM to stop working, and where we get definite outcomes.
You get definite outcomes and "classical behavior" for coarse-grained macroscopic variables. The microscopic details are only probabilistically described according to QT.
 
  • #182
vanhees71 said:
Sure, but where is there a problem?

The conceptual problem is how to say, rigorously, what it means for a device to measure an observable. Informally, or semi-classically, it means that the device is in a metastable state, and that a small perturbation proportional to the observable being measured will cause it to make a transition into one of a (usually discrete) number of stable pointer states. So there is physics involved in designing a good detector/measurement device, but it doesn't seem that this physics is purely quantum mechanics.
 
  • #183
vanhees71 said:
Another question is, how to understand the classical behavior of macroscopic objects from QT, including that of measurement devices (which are, of course, themselves just macroscopic objects, obeying the same quantum laws of nature as any other). I think that this is quite well understood in terms of quantum statistics and appropriate effective coarse-grained descriptions of macroscopic observables derived from QT.

I don't agree that it is well understood. Coarse graining is not going to get you from a deterministic superposition of possibilities to one possibility selected randomly out of the set.
 
  • #184
vanhees71 said:
Sure, it requires a choice of basis, but that's the choice of what you measure, because you have to choose the eigenbasis of the operator representing the observable you choose to measure. There's nothing very surprising here.

But you don't choose a basis, you construct a measurement device. In what sense does a measurement device choose a basis? Only in the sense that the measurement device amplifies microscopic differences in one basis so that they become macroscopic differences. The treatment of macroscopic differences is completely unlike the treatment of microscopic differences in standard quantum mechanics. At the microscopic level, an electron can be in a superposition of spin-up and spin-down. But if we have a spin measurement, the result of which is a pointer that points to the word "Up" for spin-up and "Down" for spin-down, then we don't consider superpositions of those possibilities, we get one or the other.
 
  • #185
stevendaryl said:
I don't agree that it is well understood. Coarse graining is not going to get you from a deterministic superposition of possibilities to one possibility selected randomly out of the set.
As I had said before, people working in statistical mechanics do not use the eigenvalue-eigenstate link to measurement but the postulates that I had formulated (though they are not explicit about these). This is enough to get a unique macroscopic measurement result (within experimental error).
 
  • #186
vanhees71 said:
You get definite outcomes and "classical behavior" for coarse-grained macroscopic variables. The microscopic details are only probabilistically described according to QT.

No, once you apply the Born rule, you already transition into definite outcomes. Each outcome is definite after you get it, but for identically prepared systems the definite outcomes are distributed according to the Born rule.

So it is not correct to solve the problem by coarse graining after the Born rule is applied, since there is no problem once the Born rule is applied.

The question is: who determines when a measurement is made, i.e., who determines when the Born rule is applied?
 
  • #187
stevendaryl said:
But you don't choose a basis, you construct a measurement device. In what sense does a measurement device choose a basis? Only in the sense that the measurement device amplifies microscopic differences in one basis so that they become macroscopic differences. The treatment of macroscopic differences is completely unlike the treatment of microscopic differences in standard quantum mechanics. At the microscopic level, an electron can be in a superposition of spin-up and spin-down. But if we have a spin measurement, the result of which is a pointer that points to the word "Up" for spin-up and "Down" for spin-down, then we don't consider superpositions of those possibilities, we get one or the other.
A measurement device chooses the basis because it measures the observable it is constructed for. Of course, to explain any real-world measurement device in all microscopic details with quantum mechanics (or even relativistic quantum field theory) is impossible, and it is obviously not necessary for constructing very accurate measurement devices like the big detectors at the LHC, photon detectors in quantum-optics labs, etc.
 
  • #188
atyy said:
No, once you apply the Born rule, you already transition into definite outcomes. Each outcome is definite after you get it, but for identically prepared systems the definite outcomes are distributed according to the Born rule.

So it is not correct to solve the problem by coarse graining after the Born rule is applied, since there is no problem once the Born rule is applied.

The question is: who determines when a measurement is made, i.e., who determines when the Born rule is applied?
I think I'm still not able to make this very simple argument clear. Let's try the paradigmatic example of measuring the spin with the Stern-Gerlach experiment (in the non-relativistic approximation).

You shoot (an ensemble of) single particles through an inhomogeneous magnetic field with a large static component in the ##z##-direction. According to quantum-theoretical calculations with the Pauli equation (the Schrödinger equation for a spin-1/2 particle with a magnetic moment), you get a position-spin entangled state where particles in one region are (almost) 100% in the spin state with ##\sigma_z=+1/2## and those in another, macroscopically well separated region are in the spin state with ##\sigma_z=-1/2##. Depending on the initial state (let's assume for simplicity an unpolarized source of spin-1/2 particles as in Stern's and Gerlach's original experiment, where they used a little oven with silver vapour), you get the particle with some probability (in our case 1/2) deflected in one or the other direction. So with this probability you measure ##\sigma_z=+1/2## and with the complementary probability ##\sigma_z=-1/2##.

The measurement process itself in this case consists in putting up some scintillator or CCD screen, where the particles leave a macroscopic trace to be analyzed (in the case of the original experiment, sent around the world on a now famous postcard).

Where is the measurement problem here? Of course, to describe in all microscopic detail the chemistry leading to a coloured grain on the photoplate is very difficult, but that's not needed, FAPP, to understand the outcome of the experiment and to measure the spin component of your spin-1/2 particle in this setup. So there is, FAPP, no measurement problem.
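To make the statistics explicit, here is a minimal numerical sketch of this Stern-Gerlach setup (the unpolarized beam is modeled by the maximally mixed spin-1/2 density matrix; the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# Unpolarized source: maximally mixed spin-1/2 state
rho = np.eye(2) / 2

# Projectors onto sigma_z = +1/2 and -1/2 (in units of hbar)
P_up = np.diag([1.0, 0.0])
P_down = np.diag([0.0, 1.0])

# Born rule probabilities for the two macroscopically separated spots
p_up = np.real(np.trace(rho @ P_up))      # 0.5
p_down = np.real(np.trace(rho @ P_down))  # 0.5

# Simulate an ensemble of single-atom runs: each run gives ONE definite spot
n_runs = 100_000
hits_up = rng.random(n_runs) < p_up
print(p_up, p_down, hits_up.mean())  # relative frequency of "up" spots ~ 0.5
```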
 
  • #189
vanhees71 said:
A measurement device chooses the basis because it measures the observable it is constructed for. Of course, to explain any real-world measurement device in all microscopic details with quantum mechanics (or even relativistic quantum field theory) is impossible, and it is obviously not necessary for constructing very accurate measurement devices like the big detectors at the LHC, photon detectors in quantum-optics labs, etc.

Yeah, it's difficult to give a complete quantum mechanical description of a macroscopic object, but we don't need a complete description to know that you're not going to get definite results from a theory that predicts smooth unitary evolution.
 
  • #190
vanhees71 said:
Where is the measurement problem here?

The measurement problem is to explain how we get definite results for a macroscopic system, instead of a smooth evolution of probability amplitudes. You can say that it's because of the enormous number of details involved in a realistic measurement, but I don't see how the number of particles involved can make a difference. Whether you have one particle or two or ##10^{10^{10}}##, if quantum mechanics applies, then the evolution will be smooth and unitary.
 
  • #191
vanhees71 said:
I think I'm still not able to make this very simple argument clear. Let's try the paradigmatic example of measuring the spin with the Stern-Gerlach experiment (in the non-relativistic approximation).

You shoot (an ensemble of) single particles through an inhomogeneous magnetic field with a large static component in the ##z##-direction. According to quantum-theoretical calculations with the Pauli equation (the Schrödinger equation for a spin-1/2 particle with a magnetic moment), you get a position-spin entangled state where particles in one region are (almost) 100% in the spin state with ##\sigma_z=+1/2## and those in another, macroscopically well separated region are in the spin state with ##\sigma_z=-1/2##. Depending on the initial state (let's assume for simplicity an unpolarized source of spin-1/2 particles as in Stern's and Gerlach's original experiment, where they used a little oven with silver vapour), you get the particle with some probability (in our case 1/2) deflected in one or the other direction. So with this probability you measure ##\sigma_z=+1/2## and with the complementary probability ##\sigma_z=-1/2##.

The measurement process itself in this case consists in putting up some scintillator or CCD screen, where the particles leave a macroscopic trace to be analyzed (in the case of the original experiment, sent around the world on a now famous postcard).

Where is the measurement problem here? Of course, to describe in all microscopic detail the chemistry leading to a coloured grain on the photoplate is very difficult, but that's not needed, FAPP, to understand the outcome of the experiment and to measure the spin component of your spin-1/2 particle in this setup. So there is, FAPP, no measurement problem.

Mathematically, the state space of quantum mechanics is not a simplex. In the Ensemble interpretation, this means that an ensemble does not have a unique division into sub-ensembles. This lack of uniqueness is the lack of a definite reality.

In contrast, the state space of a classical probability theory is a simplex. In the Ensemble interpretation, this means that an ensemble has a unique division into sub-ensembles. This means we can say there is a definite reality of which we are ignorant.

http://arxiv.org/abs/1112.2347
"The simplex is the only convex set which is such that a given point can be written as a mixture of pure states in one and only one way."
 
  • #192
vanhees71 said:
I think I'm still not able to make this very simple argument clear. Let's try the paradigmatic example of measuring the spin with the Stern-Gerlach experiment (in the non-relativistic approximation).

You shoot (an ensemble of) single particles through an inhomogeneous magnetic field with a large static component in the ##z##-direction. According to quantum-theoretical calculations with the Pauli equation (the Schrödinger equation for a spin-1/2 particle with a magnetic moment), you get a position-spin entangled state where particles in one region are (almost) 100% in the spin state with ##\sigma_z=+1/2## and those in another, macroscopically well separated region are in the spin state with ##\sigma_z=-1/2##. Depending on the initial state (let's assume for simplicity an unpolarized source of spin-1/2 particles as in Stern's and Gerlach's original experiment, where they used a little oven with silver vapour), you get the particle with some probability (in our case 1/2) deflected in one or the other direction. So with this probability you measure ##\sigma_z=+1/2## and with the complementary probability ##\sigma_z=-1/2##.

The measurement process itself in this case consists in putting up some scintillator or CCD screen, where the particles leave a macroscopic trace to be analyzed (in the case of the original experiment, sent around the world on a now famous postcard).

Where is the measurement problem here? Of course, to describe in all microscopic detail the chemistry leading to a coloured grain on the photoplate is very difficult, but that's not needed, FAPP, to understand the outcome of the experiment and to measure the spin component of your spin-1/2 particle in this setup. So there is, FAPP, no measurement problem.
What quantum mechanics (without collapse) predicts is that *every time* you get half a silver atom in the first direction and another half of a silver atom in the second direction. That is the measurement problem.
Best.
Jim Graber
 
  • #193
atyy said:
This lack of uniqueness is the lack of a definite reality.
Only if one thinks that the pure state is the definite reality. But this is an untestable assumption.
 
  • #194
A. Neumaier said:
Only if one thinks that the pure state is the definite reality. But this is an untestable assumption.

But nonetheless, there are times when one has to define sub-ensembles, for example when one performs a second measurement conditioned on the result of the first. The conditioning is done on a sub-ensemble.
 
  • #195
atyy said:
But nonetheless, there are times when one has to define sub-ensembles, for example when one performs a second measurement conditioned on the result of the first. The conditioning is done on a sub-ensemble.
That's why it is far more natural to regard the mixed state as the definite reality. Decomposing it into pure states is physically meaningless. This is why I formulated my postulates for the formal core of quantum mechanics without reference to wave functions. It is completely natural, and (as demonstrated there) one can get the case of pure states as a special case if desired.
 
  • #196
stevendaryl said:
The measurement problem is to explain how we get definite results for a macroscopic system, instead of a smooth evolution of probability amplitudes. You can say that it's because of the enormous number of details involved in a realistic measurement, but I don't see how the number of particles involved can make a difference. Whether you have one particle or two or ##10^{10^{10}}##, if quantum mechanics applies, then the evolution will be smooth and unitary.
We don't get "definite results" on the microscopic level but on the macroscopic level. The average values of a pointer status have small standard deviation in relation to the macroscopically relevant accuracy. The measurement of an observable of a quantum system like a particle is due to interaction of this system with a macroscopic apparatus, leading to entanglement between the measured observable and the pointer status, which is a coarse grained, i.e., over many microscopic states averaged quantity. The art is to "amplify" the quantum observable through this interaction sufficiently such that the macroscopic resolution of the pointer reading is sufficient to infer the value of the measured observable of the quantum system. This works in practice and thus there is no measurement problem from a physics point of view.
 
  • #197
vanhees71 said:
We don't get "definite results" on the microscopic level but on the macroscopic level. The average values of a pointer status have small standard deviation in relation to the macroscopically relevant accuracy. The measurement of an observable of a quantum system like a particle is due to interaction of this system with a macroscopic apparatus, leading to entanglement between the measured observable and the pointer status, which is a coarse grained, i.e., over many microscopic states averaged quantity. The art is to "amplify" the quantum observable through this interaction sufficiently such that the macroscopic resolution of the pointer reading is sufficient to infer the value of the measured observable of the quantum system. This works in practice and thus there is no measurement problem from a physics point of view.

All you are doing is replacing the "classical/quantum cut" with the "macroscopic/microscopic cut".
 
  • #198
vanhees71 said:
We don't get "definite results" on the microscopic level but on the macroscopic level

Yes, that's what I said was the essence of the measurement problem.

vanhees71 said:
The average values of a pointer variable have a small standard deviation relative to the macroscopically relevant accuracy. The measurement of an observable of a quantum system like a particle is due to the interaction of this system with a macroscopic apparatus, leading to entanglement between the measured observable and the pointer variable, which is a coarse-grained quantity, i.e., averaged over many microscopic states. The art is to "amplify" the quantum observable through this interaction sufficiently such that the macroscopic resolution of the pointer reading is sufficient to infer the value of the measured observable of the quantum system. This works in practice, and thus there is no measurement problem from a physics point of view.

Hmm. It seems to me that you've said exactly what the measurement problem is. If you have a system that is in a superposition of two states, and you amplify it so that the differences become macroscopic, why doesn't that lead to a macroscopic system in a superposition of two states? Why aren't there macroscopic superpositions?

It seems to me that there are only two possible answers:
  1. There are no macroscopic superpositions. In that case, the problem would be how to explain why not.
  2. There are macroscopic superpositions. In that case, the problem would be to explain why they're unobservable, and what the meaning of Born probabilities is if there are no choices made among possibilities.
People sometimes act as if decoherence is the answer, but it's really not the complete answer. Decoherence is a mechanism by which a superposition involving a small subsystem can quickly spread to "infect" the rest of the universe. It does not solve the problem of why there are definite outcomes.
 
  • #199
atyy said:
All you are doing is replacing the "classical/quantum cut" with the "macroscopic/microscopic cut".

It's the same cut. The cut that is important is such that on one side, you have superpositions of possibilities, evolving smoothly according to Schrödinger's equation. On the other side, you have definite properties: cats are either alive or dead, not in superpositions.
 
  • #200
jimgraber said:
What quantum mechanics (without collapse) predicts is that *every time* you get half a silver atom in the first direction and another half of a silver atom in the second direction. That is the measurement problem.
Best.
Jim Graber
No, it predicts that, repeating the experiment very often, I always measure one whole silver atom, which in half of all cases goes in the first direction and in the other half in the second direction.
 