# Measurement problem in the Ensemble interpretation

Demystifier
Gold Member
> Exactly, therefore their stability upon measurement is not complete and, as you say, depends on energy. This is my point.
I still don't understand what your main point is behind all your posts about stability and measurement.

> I still don't understand what your main point is behind all your posts about stability and measurement.
In QT, as you now seem to acknowledge (despite your demonstrations to the contrary in #106), measurements are not completely stable: uncertainties and coupling constants are constantly adjusted to the relevant energy because of "the fact that directly measurable quantities (like scattering cross sections) depend on energy". My main point was this, and also my puzzlement that even with this lack of stability measurements are possible and consistent, and that we can mathematically model idealized measuring tools with conserved quantities (intervals, inner products, etc.).

Demystifier
> In QT, as you now seem to acknowledge (despite your demonstrations to the contrary in #106), measurements are not completely stable: uncertainties and coupling constants are constantly adjusted to the relevant energy because of "the fact that directly measurable quantities (like scattering cross sections) depend on energy". My main point was this, and also my puzzlement that even with this lack of stability measurements are possible and consistent, and that we can mathematically model idealized measuring tools with conserved quantities (intervals, inner products, etc.).
So they are not completely stable, but they are quite stable. Isn't that enough for most practical purposes?

martinbn
> And how would non-local correlations be explained by local hidden variables?
I am not sure what you're asking. They would be explained the usual way: pink and green socks always match.

> It can't be explained in this way, since the Bell inequality (and related theorems) is violated by QT, and experiment shows that QT is right but not local HV theories.
No, because he is considering a hypothetical scenario in which we are in 1920 but have the QM experimental results; there is no Bell yet.

Demystifier
> I am not sure what you're asking. They would be explained the usual way: pink and green socks always match.

> No, because he is considering a hypothetical scenario in which we are in 1920 but have the QM experimental results; there is no Bell yet.
But the Bell theorem, which says that certain types of correlations cannot be explained by local hidden variables, does not depend on knowledge of quantum mechanics. A good probability theorist could have derived it in the 19th century. One of Bell's points is precisely that such correlations are not like matching socks.

vanhees71
> So they are not completely stable, but they are quite stable. Isn't that enough for most practical purposes?
It is, and that's why I keep asking how the instability is kept small in a random quantum context for measurement dynamics, so that it is quite stable for practical purposes. You said because of interactions, and in a way I guess the coupling constants are stable enough in practice, as they run very slowly across different energies, but I would like to know the mechanism, as it doesn't seem to be explained by the quantum axioms and principles.

Demystifier
Gold Member
> and in a way I guess the coupling constants are stable enough in practice, as they run very slowly across different energies
Exactly!

vanhees71
Gold Member
2019 Award
Yes, and in addition you define the coupling constants in question at a definite scale. For ##\alpha_{\text{em}}## that is the low-energy regime, as it has always been defined. I don't see any problem here. Of course, if one day we find a better theory revealing what's really behind all these constants, which manifest our ignorance, it may well be that we have to redefine our system of units again. That's indeed the nature of the natural sciences, which are based on empirical facts and their theoretical analysis!
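The slow, logarithmic running being discussed here can be made concrete with the one-loop QED formula for ##\alpha_{\text{em}}##. This is a toy sketch that keeps only the electron loop; the physical running includes all charged fermions (the measured value near the Z mass is about 1/128, not the ~1/134.5 this simplification gives), so the numbers are illustrative only.

```python
import math

alpha0 = 1 / 137.036   # fine-structure constant at low energy
m_e = 0.000511         # electron mass in GeV

def alpha_em(Q):
    """One-loop running of alpha_em with only the electron loop.

    A simplified illustration: alpha(Q) = alpha0 / (1 - (2*alpha0/3pi) ln(Q/m_e)).
    """
    return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q / m_e))

for Q in [0.001, 1.0, 91.0]:   # energy scales in GeV
    print(Q, 1 / alpha_em(Q))
# 1/alpha drifts only logarithmically: ~137 near m_e, ~134.5 at the Z mass
# (electron loop only), which is why the coupling is "quite stable" in practice.
```

The key point of the sketch is the logarithm: changing the energy by five orders of magnitude shifts ##1/\alpha## by only a few percent.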

martinbn
> But the Bell theorem, which says that certain types of correlations cannot be explained by local hidden variables, does not depend on knowledge of quantum mechanics. A good probability theorist could have derived it in the 19th century. One of Bell's points is precisely that such correlations are not like matching socks.
I see: we are in 1920, there is no QM yet, there are lucky experiments that show the unexplained QM results, and we know Bell's theorem.

Then it will be a very big puzzle for the physicists, but in my opinion action at a distance will not be the most popular approach.

I am guessing that is your point: that they must conclude that there is some instantaneous action.

Demystifier
> I am guessing that is your point: that they must conclude that there is some instantaneous action.
Yes.

vanhees71
Well, and that would immediately tell them that this interpretation is inconsistent with the relativistic space-time structure; and since there were very clever people in the past who could not live with such obvious contradictions in the theoretical picture of the world, we have relativistic QFT today and don't use problematic classical prejudices to describe the world.

martinbn
> Well, and that would immediately tell them that this interpretation is inconsistent with the relativistic space-time structure; and since there were very clever people in the past who could not live with such obvious contradictions in the theoretical picture of the world, we have relativistic QFT today and don't use problematic classical prejudices to describe the world.
His point is that at the time relativity was relatively new and not so firmly established in their way of thinking, so there would have been at least some who would have considered the possibility of action at a distance.

vanhees71
Don't underestimate our "old heroes" like Einstein who understood their relativity very well (at least after 1908 when Minkowski brought mathematical order into the game)!

Demystifier
> Don't underestimate our "old heroes" like Einstein who understood their relativity very well (at least after 1908 when Minkowski brought mathematical order into the game)!
That leads to another interesting counter-factual question: how would Einstein interpret QM today, after becoming familiar with the Bell theorem and the experiments that rule out local hidden variables?

vanhees71
Well, there are two possibilities:

(a) Einstein might have become more and more convinced that Q(F)T is more complete than he thought when writing the EPR paper.
(b) Einstein might have concluded that Q(F)T is even worse than he thought when writing the EPR paper, and looked all the more vigorously for a classical unified field theory, but then knowing that he'd have to look for a non-local theory, which doesn't simplify the task.

That's of course speculation ;-).

> So what can the ensemble interpretation say about the measurement problem of single measurements?
Let's pick some simple example, say, measuring a normalized state a|0> + b|1> in the computational basis. There is no repetition, nor are there many identically prepared states. You make the measurement exactly once.

What the ensemble interpretation says is that the rules of quantum mechanics describe the statistical behavior of a conceptual ensemble of identically prepared states. In a fraction |a|² of them the measurement yields |0>, and in a fraction |b|² it yields |1>. Thus, in good frequentist fashion, the probability that a single measurement will yield |0> is |a|².

What this says about a measurement problem depends on what one found problematic about measurements in the first place. In any event, it seems to me that the ensemble interpretation is not an "interpretation" in the same vein that many worlds or Bohmian mechanics are interpretations. It doesn't aim to provide a "classical" underlying model whence the laws of quantum mechanics follow. The goal is to provide a well-defined shut-up-and-calculate recipe. As such, it is compatible with a hidden variable model should one desire one, such as the Bohmian mechanics I believe you favor.
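The single-shot versus conceptual-ensemble distinction above can be sketched numerically. This is a minimal illustration with hypothetical amplitudes a = 0.6, b = 0.8 (so |a|² = 0.36): one run of the sampler is a single measurement with a definite but unpredictable outcome, while many runs recover the Born-rule frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical amplitudes for the normalized state a|0> + b|1>.
a, b = 0.6, 0.8
p0 = abs(a) ** 2            # Born-rule probability of outcome |0>

# A single measurement yields exactly one definite outcome...
single_shot = rng.random() < p0   # True -> |0>, False -> |1>

# ...while the conceptual ensemble fixes the relative frequencies.
n = 100_000
freq0 = (rng.random(n) < p0).mean()

print(f"P(|0>) = {p0}")
print(f"ensemble frequency of |0> ~ {freq0:.3f}")
```

In frequentist fashion, nothing in the sketch "explains" the individual outcome `single_shot`; only `freq0`, the property of the ensemble, is constrained by the theory.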

vanhees71
The ensemble representation simply says that, with the probability given by Born's rule, you get the corresponding results when measuring the observable, no more, no less. Within the ensemble representation, which takes the probabilistic properties of nature according to QT really seriously, it doesn't make sense to ask why you get a specific outcome in a given single measurement. That you must get a definite outcome is due to the construction of the measurement device. If it didn't lead to definite outcomes, it wouldn't define a measurement accurately enough. In that case you have to use some error analysis related to your measurement apparatus, which has nothing to do with the probability inherent in nature due to QT; it's just a matter of using a "bad" measurement apparatus.

> The ensemble representation simply says that, with the probability given by Born's rule, you get the corresponding results when measuring the observable, no more, no less. Within the ensemble representation, which takes the probabilistic properties of nature according to QT really seriously, it doesn't make sense to ask why you get a specific outcome in a given single measurement. That you must get a definite outcome is due to the construction of the measurement device. If it didn't lead to definite outcomes, it wouldn't define a measurement accurately enough. In that case you have to use some error analysis related to your measurement apparatus, which has nothing to do with the probability inherent in nature due to QT; it's just a matter of using a "bad" measurement apparatus.
OK, but what is hard to understand is the claim that the probability inherent in nature due to QT has nothing to do with the error analysis of the measuring apparatus, when one starts from the premise that measurement apparatuses are part of nature and are therefore also quantum, and when the Born rule is as much about probability as about measurements and doesn't distinguish measuring devices from other objects. So why would one separate quantum measurements from the construction of the measurement devices? Are these devices not quantum, perhaps? Is there something in their construction or their functioning that escapes QT?

vanhees71
Of course, measurement devices are as quantum as any matter. Nothing in what I said above implies otherwise.

> Of course, measurement devices are as quantum as any matter. Nothing in what I said above implies otherwise.
You said that defining a measurement accurately enough to be of use (measurement uncertainty) has nothing to do with the probability inherent to nature in QT. Why is this, if measurement devices are as quantum as anything else? Measurements are a kind of interaction; are these interactions not quantum?

vanhees71
Of course the measurement device, the measured object, and their interaction are all described by QT, but where, in your opinion, is there a problem in principle with being able to construct a measurement apparatus that measures, e.g., the position of an electron very accurately?

> where, in your opinion, is there a problem in principle with being able to construct a measurement apparatus that measures, e.g., the position of an electron very accurately?
I wouldn't formulate the question in such classical terms, as they can be very misleading by suggesting little balls and trajectories. I don't see any problem in principle with having a measurement apparatus that measures a quantum field configuration, localizing its strength to an accuracy corresponding to the energy employed, much as colliders do.

But this is not related to my question of why quantum measurement uncertainty would have nothing to do with the inherent quantum uncertainty/probability.

vanhees71
The uncertainty relation is not about an uncertainty in measurements but about the impossibility, in principle, of preparing a system in a state in which two incompatible observables take accurate values. E.g., the position-momentum uncertainty relation, ##\Delta x \Delta p_x \geq \hbar/2##, tells you that you cannot find a state for which both the standard deviation of a position component and that of the momentum component in the same direction become arbitrarily small. Of course, you can find states with arbitrarily small ##\Delta x##, but then ##\Delta p_x## must be at least as large as given by the uncertainty relation (and vice versa).
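The preparation-uncertainty reading can be checked numerically: compute ##\Delta x## and ##\Delta p_x## for a Gaussian wave packet on a grid and verify that the product sits at the bound ##\hbar/2##. This is an illustrative sketch in natural units (##\hbar = 1##) with an arbitrarily chosen width.

```python
import numpy as np

hbar = 1.0
sigma = 0.7   # illustrative packet width (arbitrary choice)

# Gaussian wave packet psi(x) ~ exp(-x^2 / (4 sigma^2)), normalized on a grid.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(psi**2) * dx)

# Position spread from the probability density |psi|^2 (psi is real here).
prob = psi**2
mean_x = np.sum(x * prob) * dx
dx_std = np.sqrt(np.sum((x - mean_x)**2 * prob) * dx)

# Momentum spread from derivatives: p = -i hbar d/dx; <p> = 0 for this real psi.
dpsi = np.gradient(psi, x)
d2psi = np.gradient(dpsi, x)
mean_p2 = -hbar**2 * np.sum(psi * d2psi) * dx
dp_std = np.sqrt(mean_p2)

print(dx_std * dp_std)   # ~0.5 = hbar/2: a minimum-uncertainty state
```

Narrowing `sigma` shrinks `dx_std` but inflates `dp_std` in lockstep; the product never drops below the bound, which is a statement about the prepared state, not about any measuring device.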

Can the ensemble in the ensemble interpretation be phenomenologically ascribed to the uncertainty relations, or to the measurement uncertainty?

> Exactly!
But my quibble was that no matter how small the instability or drift is, it should build up with time for the measuring tools, increasing the error instead of keeping it constant, since it would be a systematic error, at least according to Schrödinger's equation. Instead, quantum statistical mechanics mixes this uncertainty with the random error inherent in the statistical atomic theory and cancels out the uncertainties, so that they are not distinguishable from the randomization in classical statistical mechanics, except for the different distributions that are obtained in the cases with spin.
So, as a matter of fact, between measurements the uncertainty and the dispersion increase in a deterministic and systematic way, as shown by the Schrödinger equation and its dispersion relations, and we are left with the situation, no less puzzling for being well known, that if we don't look the uncertainty builds up regardless of the considerations of quantum statistical mechanics, but if we look (performing consecutive measurements) the uncertainty is stable and keeps a stable macroscopic picture.
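The deterministic build-up of dispersion between measurements can be made concrete with the standard textbook result for a free Gaussian packet, ##\Delta x(t) = \Delta x_0 \sqrt{1 + (\hbar t / 2 m \Delta x_0^2)^2}##. A small sketch in natural units (an illustration of the formula, not of any particular apparatus):

```python
import numpy as np

hbar, m = 1.0, 1.0   # natural units
dx0 = 1.0            # initial position spread of a free Gaussian packet

def spread(t):
    """Delta x(t) for a freely evolving Gaussian wave packet."""
    return dx0 * np.sqrt(1 + (hbar * t / (2 * m * dx0**2)) ** 2)

for t in [0.0, 1.0, 10.0, 100.0]:
    print(t, spread(t))
# The spread grows without bound between measurements; for large t it grows
# linearly, spread(t) ~ hbar * t / (2 * m * dx0).
```

This is exactly the tension described above: under unitary evolution alone the spread grows systematically, while repeated measurements keep resetting it, so the macroscopic picture stays stable.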