Measurement problem in the Ensemble interpretation

In summary: the moon is in a particular momentum eigenstate, but the ensemble interpretation does not say why it shows no interference. More generally, the ensemble interpretation of QM does not address the measurement problem, since it applies only to ensembles of similarly prepared systems and does not consider single measurements. It may seem to remove the need for wave-function collapse, but it explains neither the outcomes of single measurements nor the quantum-to-classical transition. The inability to address the measurement problem is a problem in itself, and it also weakens the explanatory power of physics with respect to classical phenomena.
  • #176
RockyMarciano said:
Exactly; therefore their stability upon measurement is not complete and, as you say, depends on energy. This is my point.
I still don't understand what your main point is behind all your posts about stability and measurement. :wideeyed:
 
  • #177
Demystifier said:
I still don't understand what your main point is behind all your posts about stability and measurement. :wideeyed:
In QT, as you now seem to acknowledge (in spite of your demonstrations to the contrary in #106), measurements are not completely stable: uncertainties and coupling constants are constantly adjusted to the relevant energy because of "the fact that directly measurable quantities (like scattering cross sections) depend on energy". That was my main point, together with my puzzlement that even with this lack of stability measurements are possible and consistent, and that we can mathematically model idealized measuring tools that are conserved (intervals, inner products, etc.).
 
  • #178
RockyMarciano said:
In QT, as you now seem to acknowledge (in spite of your demonstrations to the contrary in #106), measurements are not completely stable: uncertainties and coupling constants are constantly adjusted to the relevant energy because of "the fact that directly measurable quantities (like scattering cross sections) depend on energy". That was my main point, together with my puzzlement that even with this lack of stability measurements are possible and consistent, and that we can mathematically model idealized measuring tools that are conserved (intervals, inner products, etc.).
So they are not completely stable, but they are quite stable. Isn't that enough for most practical purposes?
 
  • #179
Demystifier said:
And how would non-local correlations be explained by local hidden variables?

I am not sure what you're asking. They would be explained the usual way: pink and green socks always match.

vanhees71 said:
It can't be explained in this way, since the Bell inequality (and related theorems) is violated by QT, and experiment shows that QT is right and local HV theories are not.

No, because he is considering a hypothetical scenario in which we are in 1920 but already have the QM experimental results; there is no Bell theorem yet.
 
  • #180
martinbn said:
I am not sure what you're asking. They would be explained the usual way: pink and green socks always match.

No, because he is considering a hypothetical scenario in which we are in 1920 but already have the QM experimental results; there is no Bell theorem yet.
But the Bell theorem, that certain types of correlations cannot be explained by local hidden variables, does not depend on knowledge of quantum mechanics. A good probability theorist could have derived it in the 19th century. One of Bell's points is precisely that such correlations are not like matching socks.
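To make the contrast with matching socks concrete, here is a minimal numerical sketch of the CHSH version of the argument (assuming the standard spin-singlet correlations; numpy is used just for convenience):

```python
import numpy as np

# Quantum prediction for spin-singlet correlations: E(a, b) = -cos(a - b).
def E_quantum(a, b):
    return -np.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
def chsh(E, a, ap, b, bp):
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Standard angle choice that maximizes the quantum value.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
print("quantum |S| =", abs(chsh(E_quantum, a, ap, b, bp)))  # ~2.828 = 2*sqrt(2)

# A local deterministic ("matching socks") strategy preassigns +/-1 outcomes
# to each setting; the correlations are then products of fixed numbers, and
# no assignment pushes |S| above 2 (nor do mixtures of assignments, by convexity).
best = 0.0
for A1 in (-1, 1):
    for A2 in (-1, 1):
        for B1 in (-1, 1):
            for B2 in (-1, 1):
                S = A1*B1 - A1*B2 + A2*B1 + A2*B2
                best = max(best, abs(S))
print("best local deterministic |S| =", best)  # 2
```

No quantum mechanics enters the second half of the sketch, which is the sense in which a 19th-century probabilist could already have found the bound.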
 
  • #181
Demystifier said:
So they are not completely stable, but they are quite stable. Isn't that enough for most practical purposes?
It is, and that's why I keep asking how the instability is kept small, in a random quantum context, for measurement dynamics, so that it is quite stable for practical purposes. You said it is because of interactions, and in a way I guess the coupling constants are stable enough in practice since they run very slowly with energy, but I would like to know the mechanism, as it doesn't seem to be explained by the quantum axioms and principles.
 
  • #182
RockyMarciano said:
and in a way I guess the coupling constants are stable enough in practice since they run very slowly with energy
Exactly!
 
  • #183
Yes, and in addition you define the coupling constants in question at a definite scale; for ##\alpha_{\text{em}}## that is the low-energy regime, as it has always been defined. I don't see any problem here. Of course, if one day we find a better theory revealing what is really behind all these constants, which manifest our ignorance, it may well be that we have to revise our system of units again. That's indeed the nature of the natural sciences, which are based on empirical facts and their theoretical analysis!
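As a rough illustration of how slowly the coupling runs, here is a sketch of the one-loop QED running with only the electron in the loop (a deliberate simplification; the measured running up to the Z mass also gets contributions from the other charged fermions):

```python
import numpy as np

ALPHA_0 = 1 / 137.036   # fine-structure constant defined in the low-energy regime
M_E = 0.000511          # electron mass in GeV

def alpha_one_loop(Q):
    """One-loop running of alpha_em with a single electron loop (valid for Q > M_E)."""
    return ALPHA_0 / (1 - (2 * ALPHA_0 / (3 * np.pi)) * np.log(Q / M_E))

for Q in (0.001, 1.0, 10.0, 91.19):   # energies in GeV; 91.19 ~ Z mass
    print(f"Q = {Q:7.3f} GeV   1/alpha ~ {1 / alpha_one_loop(Q):.2f}")
# 1/alpha drifts only from ~137 to ~134 over five orders of magnitude in energy
# in this approximation, which is the sense in which the constants "run very slowly".
```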
 
  • #184
Demystifier said:
But the Bell theorem, that certain types of correlations cannot be explained by local hidden variables, does not depend on knowledge of quantum mechanics. A good probability theorist could have derived it in the 19th century. One of Bell's points is precisely that such correlations are not like matching socks.

I see: we are in 1920, there is no QM yet, there are lucky experiments that show the (as yet unexplained) QM results, and we know Bell's theorem.

Then it will be a very big puzzle for the physicists, but in my opinion they will not find action at a distance the most popular approach.

I am guessing that is your point: that they must conclude that there is some instantaneous action.
 
  • #185
martinbn said:
I am guessing that is your point: that they must conclude that there is some instantaneous action.
Yes.
 
  • #186
Well, and that would immediately tell them that this interpretation is inconsistent with the relativistic space-time structure, and it is because there were very clever people in the past, who could not live with such obvious contradictions in the theoretical picture of the world, that we have relativistic QFT today and don't use problematic classical prejudices to describe the world.
 
  • #187
vanhees71 said:
Well, and that would immediately tell them that this interpretation is inconsistent with the relativistic space-time structure, and it is because there were very clever people in the past, who could not live with such obvious contradictions in the theoretical picture of the world, that we have relativistic QFT today and don't use problematic classical prejudices to describe the world.

His point is that at the time relativity was relatively new and not yet so firmly established in physicists' thinking, so there would have been at least some who would have considered the possibility of action at a distance.
 
  • #188
Don't underestimate our "old heroes" like Einstein who understood their relativity very well (at least after 1908 when Minkowski brought mathematical order into the game)!
 
  • #189
vanhees71 said:
Don't underestimate our "old heroes" like Einstein who understood their relativity very well (at least after 1908 when Minkowski brought mathematical order into the game)!
That leads to another interesting counter-factual question. How would Einstein interpret QM today, after becoming familiar with the Bell theorem and the experiments that rule out local hidden variables?
 
  • #190
Well, there are two possibilities:

(a) Einstein might become more and more convinced that Q(F)T is more complete than he thought when writing the EPR paper.
(b) Einstein might think that Q(F)T is even worse than he thought when writing the EPR paper and look all the more vigorously for a classical unified field theory, but now knowing that he'd have to look for a non-local theory, which doesn't simplify the task.

That's of course speculation ;-).
 
  • #191
Demystifier said:
So what can ensemble interpretation say about the measurement problem of single measurements?

Let's pick a simple example: say, measuring a normalized state a|0> + b|1> in the computational basis. There is no repetition, nor are there many identically prepared states. You make the measurement exactly once.

What the ensemble interpretation says is that the Rules of Quantum Mechanics describe the statistical behavior of a conceptual ensemble of identically prepared states. In a fraction |a|² of them the measurement yields |0>, and in a fraction |b|² it yields |1>. Thus, in good frequentist fashion, the probability that a single measurement will yield |0> is |a|².
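A minimal frequentist sketch of exactly that, with arbitrarily chosen amplitudes, sampling the conceptual ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

# Normalized state a|0> + b|1>; these particular amplitudes are just an example.
a, b = np.sqrt(0.3), 1j * np.sqrt(0.7)
assert np.isclose(abs(a)**2 + abs(b)**2, 1.0)

# Conceptual ensemble: N identically prepared copies, each measured exactly once
# in the computational basis.  Born's rule assigns P(0) = |a|^2, P(1) = |b|^2.
N = 100_000
outcomes = rng.choice([0, 1], size=N, p=[abs(a)**2, abs(b)**2])

print("fraction of |0> outcomes:", np.mean(outcomes == 0))  # ~ 0.3
print("|a|^2                   :", abs(a)**2)
```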

What this says about a measurement problem depends on what one found problematic about measurements in the first place. In any event, it seems to me that the ensemble interpretation is not an "interpretation" in the same vein as many worlds or Bohmian mechanics. It doesn't aim to provide a "classical" underlying model whence the laws of quantum mechanics follow. The goal is to provide a well-defined shut-up-and-calculate recipe. As such, it is compatible with a hidden variable model should one desire one, such as the Bohmian mechanics I believe you favor.
 
  • #192
The ensemble representation simply says that, with the probabilities given by Born's rule, you get the corresponding results when measuring the observable, no more, no less. Within the ensemble representation, which takes the probabilistic properties of nature according to QT really seriously, it doesn't make any sense to ask why you get a specific outcome for a given single measurement. That you must get a definite outcome is due to the construction of the measurement device: if it didn't lead to definite outcomes, it wouldn't define a measurement accurately enough. In that case you have to use some error analysis related to your measurement apparatus, which has nothing to do with the probability inherent in nature due to QT but is just a matter of using a "bad" measurement apparatus.
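As a toy illustration of the separation being drawn here, one can model the two contributions independently (the state, the noise level, and the noise model below are assumptions made purely for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ideal spin-1/2 measurement along z on the state (|+z> + |-z>)/sqrt(2):
# Born's rule gives the eigenvalues +1 and -1 with probability 1/2 each.
# This spread is inherent in the prepared state, not in the instrument.
N = 50_000
ideal = rng.choice([+1.0, -1.0], size=N, p=[0.5, 0.5])

# A "bad" apparatus adds its own readout noise on top of each ideal outcome;
# this part belongs to the ordinary error analysis of the device.
recorded = ideal + rng.normal(0.0, 0.2, size=N)

print("std of ideal outcomes   :", ideal.std())     # ~ 1.0 (quantum)
print("std of recorded outcomes:", recorded.std())  # ~ sqrt(1 + 0.2**2)
# Building a better apparatus shrinks the second contribution toward zero;
# the first one remains, however the device is constructed.
```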
 
  • #193
vanhees71 said:
The ensemble representation simply says that, with the probabilities given by Born's rule, you get the corresponding results when measuring the observable, no more, no less. Within the ensemble representation, which takes the probabilistic properties of nature according to QT really seriously, it doesn't make any sense to ask why you get a specific outcome for a given single measurement. That you must get a definite outcome is due to the construction of the measurement device: if it didn't lead to definite outcomes, it wouldn't define a measurement accurately enough. In that case you have to use some error analysis related to your measurement apparatus, which has nothing to do with the probability inherent in nature due to QT but is just a matter of using a "bad" measurement apparatus.
Ok, but what is hard to understand is the claim that the probability inherent in nature due to QT has nothing to do with the error analysis of the measuring apparatus, when one starts from the premise that measurement apparatuses are part of nature and are therefore also quantum, and when the Born rule is as much about probability as about measurements and doesn't distinguish measuring devices from other objects. So why would one separate quantum measurements from the construction of the measurement devices? Are these devices not quantum, perhaps? Is there something in their construction or their functioning that escapes QT?
 
  • #194
Of course, measurement devices are as quantum as any matter. Nothing in what I said above implies otherwise.
 
  • #195
vanhees71 said:
Of course, measurement devices are as quantum as any matter. Nothing in what I said above implies otherwise.
You said that defining a measurement accurately enough to be of use (measurement uncertainty) has nothing to do with the probability inherent to nature in QT. Why is this, if measurement devices are as quantum as anything else? Measurements are a kind of interaction; are these interactions not quantum?
 
  • #196
Of course the measurement device, the measured object, and their interaction are all described by QT, but where, in your opinion, is there a problem in principle with being able to construct a measurement apparatus that measures, e.g., the position of an electron very accurately?
 
  • #197
vanhees71 said:
where, in your opinion, is there a problem in principle with being able to construct a measurement apparatus that measures, e.g., the position of an electron very accurately?
I wouldn't formulate the question in such classical terms, as they can be very misleading by suggesting small balls and trajectories. I don't see any problem in principle with having a measurement apparatus that can measure a quantum field configuration whose strength is localized to an accuracy corresponding to the energy employed, much like colliders do.

But this is not related to my question of why quantum measurement uncertainty would have nothing to do with the inherent quantum uncertainty/probability.
 
  • #198
The uncertainty relation is not about an uncertainty in measurements but about being in principle unable to prepare a system in a state in which two incompatible observables both take accurate values. E.g., the position-momentum uncertainty relation, ##\Delta x \Delta p_x \geq \hbar/2##, tells you that you cannot find a state for which the standard deviations of the position component and of the momentum component (in the same direction) both become arbitrarily small. Of course, you can find states with arbitrarily small ##\Delta x##, but then ##\Delta p_x## must be at least as large as given by the uncertainty relation (and vice versa).
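A quick numerical check of this for Gaussian wavepackets, which saturate the bound (a sketch in units with ##\hbar = 1##; the grid parameters are arbitrary):

```python
import numpy as np

HBAR = 1.0  # working in units with hbar = 1

def spreads(sigma_x, L=200.0, n=2**14):
    """Standard deviations of x and p for a Gaussian wavepacket of width sigma_x."""
    x = np.linspace(-L/2, L/2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = (2*np.pi*sigma_x**2)**(-0.25) * np.exp(-x**2 / (4*sigma_x**2))
    dX = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)   # <x> = 0 by symmetry
    # momentum-space wavefunction via FFT (overall phases drop out of |phi|^2)
    phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2*np.pi)
    p = 2*np.pi*HBAR * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    dP = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * (p[1] - p[0]))  # <p> = 0
    return dX, dP

for s in (0.5, 1.0, 2.0):
    dX, dP = spreads(s)
    print(f"sigma_x = {s}:  dX * dP = {dX*dP:.4f}   (hbar/2 = {HBAR/2})")
# Squeezing dX below any chosen value simply inflates dP; the product stays ~ 0.5.
```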
 
  • #199
The ensemble in the ensemble interpretation can be phenomenologically ascribed to the uncertainty relations or to the measurement uncertainty, no?
 
  • #200
Demystifier said:
Exactly!
But my quibble was that, no matter how small the instability or the drift is, it should build up with time for the measuring tools, increasing the error instead of keeping it constant, since it would be a systematic error, at least according to Schrödinger's equation. Instead of that, quantum statistical mechanics mixes this uncertainty with the random error inherent in the statistical atomic theory and cancels out the uncertainties, so that they are not distinguishable from the randomization in classical statistical mechanics, except for the different distributions that are obtained in the cases with spin.
So, as a matter of fact, between measurements the uncertainty and the dispersion increase in a deterministic and systematic way, as shown by the Schrödinger equation and its dispersion relations, and we are left with the well-known but no less puzzling situation that if we don't look the uncertainty builds up, regardless of the considerations of quantum statistical mechanics, but if we look (performing consecutive measurements) the uncertainty is stable and keeps a stable macroscopic picture.
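For reference, the deterministic spreading in question is given, for a free Gaussian wavepacket, by the textbook formula ##\Delta x(t) = \Delta x(0)\sqrt{1 + \left(\frac{\hbar t}{2 m \Delta x(0)^2}\right)^2}##. A minimal sketch (the electron mass and the angstrom-scale initial width are chosen purely for illustration):

```python
import numpy as np

HBAR = 1.0545718e-34   # J s
M_E  = 9.1093837e-31   # kg, electron mass

def delta_x(t, dx0, m=M_E):
    """Width of a free Gaussian wavepacket after time t under the Schroedinger equation."""
    return dx0 * np.sqrt(1 + (HBAR * t / (2 * m * dx0**2))**2)

dx0 = 1e-10  # initial width ~ 1 angstrom
for t in (0.0, 1e-16, 1e-15, 1e-12):   # seconds
    print(f"t = {t:8.1e} s   dx = {delta_x(t, dx0):.3e} m")
# Left alone, the packet spreads without bound (very quickly for an electron confined
# to atomic scale); consecutive measurements re-localize it, which is one way of
# phrasing the puzzle raised above.
```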
 
