A Measurement problem in the Ensemble interpretation

  • #101
Demystifier said:
Here is why it is problematic. You simultaneously assume that
1) The measured system (particle) exists even before measurement.
2) The dynamics is local.
3) The random decision happens when the detector clicks (not before).

Indeed, each assumption by itself seems reasonable. But the problem is that they cannot all be simultaneously true. At least one must be wrong. You must give up at least one of them.

Let me explain why they cannot all be true. From 3) and 1) it follows that, immediately before the click, the system exists not only near one detector, but near both of them. But then, puff, at the time of the click, the system suddenly ceases to exist near the detector that didn't click. How did this part of the system know that the click happened near the other part? Since the two parts are spatially separated, there must have been some non-local (even if random) mechanism, which contradicts 2). Hence assumptions 1) and 3) contradict 2), which implies that it is not possible that all three assumptions are true.

And yet, you seem not to be ready to give up any of the three assumptions. That's the problem.

Note that the argument above is even simpler than the Bell theorem, because the system studied above does not involve entanglement. The Bell theorem derives a contradiction by assuming 1), 2) and entanglement. The argument above derives a contradiction by assuming 1), 2) and 3).
To my mind, assumptions 1) and 2) are misleading when thinking about quantum phenomena. These assumptions are based on classical conceptions.

Regarding assumption 3), I follow J. Marburger: "We can only measure detector clicks. But when we hear the click we say 'there's an electron!' We cannot help but think of the clicks as caused by little localized pieces of stuff that we might as well call particles. This is where the particle language comes from. It does not come from the underlying stuff, but from our psychological predisposition to associate localized phenomena with particles." (J. Marburger, "On the Copenhagen interpretation of quantum mechanics", in Symposium on The Copenhagen Interpretation: Science and History on Stage, National Museum of Natural History of the Smithsonian Institution, 2 March 2002)
 
Last edited by a moderator:
  • Like
Likes Demystifier, AlexCaledin and vanhees71
  • #102
RockyMarciano said:
How would you reconcile this with the fact that measurements must be possible in a dynamical world for science to make sense (for different local measurements to be coherent with one another), which seems to imply that at least there must be conservation laws for dynamical measuring tools?
I don't see how this implies a need for conservation laws.
 
  • #103
Demystifier said:
I don't see how this implies a need for conservation laws.
I mean that something must be conserved for measurements to be valid regardless of where and when they are performed, and how (at which energy, etc.), i.e. for the physics not to depend on any special factor in a dynamical or changing context.
 
  • #104
RockyMarciano said:
I mean that something must be conserved for measurements to be valid regardless of where and when they are performed, and how (at which energy, etc.), i.e. for the physics not to depend on any special factor.
Well, to measure a distance with a meter, the length of the meter should not change. But there is no law of conservation of length. What we need here is stability, not conservation laws.
 
  • Like
Likes vanhees71
  • #105
Demystifier said:
Well, to measure a distance with a meter, the length of the meter should not change. But there is no law of conservation of length. What we need here is stability, not conservation laws.
How do you maintain this stability without conservation laws in a dynamical context?
 
  • #106
RockyMarciano said:
How do you maintain this stability without conservation laws in a dynamical context?
The meter can exchange energy with the environment (e.g. it can absorb heat), so the energy of the meter is not conserved. Yet, the meter as a solid object is stable. If the meter were made of liquid it would not be stable (and not useful as a meter) even when its energy is constant. This demonstrates that stability and energy conservation are not directly related to each other.

More formally, consider a particle in a potential of the form
$$V(x,t)=\frac{kx^2}{2}+U(t)$$
where ##U(t)## is a positive non-constant function of time ##t##. Clearly this potential does not conserve energy. Yet, the particle position ##x=0## is stable, provided that ##k## is positive. If ##k## were negative the position ##x=0## would not be stable, even if ##U(t)## were zero. That's another demonstration that stability and energy conservation are not related.
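To spell out the stability claim (a short classical sketch, with ##m## denoting the particle's mass): the force does not depend on ##U(t)## at all,
$$F=-\frac{\partial V}{\partial x}=-kx, \qquad m\ddot{x}=-kx,$$
so for ##k>0## the motion is that of an ordinary stable harmonic oscillator, while the energy
$$E(t)=\frac{p^2}{2m}+V(x,t), \qquad \frac{dE}{dt}=\frac{\partial V}{\partial t}=\dot{U}(t),$$
is not conserved for non-constant ##U##.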
 
Last edited:
  • #107
vanhees71 said:
I still don't get what the problem with that should be. On the contrary, thanks to Q(F)T we have a theory that describes such decays very well.

If there's a conserved charge, at least you know that it will be there forever in one form or another. To be sure that a once prepared particle is always there, of course it must be stable, because if there is only the tiniest probability for its decay, then you can never be sure that it is still there after some time. That's why it's called unstable.
Let me try to explain the problem once again. The conserved quantities (charge, energy, ...) do not change. But the role of dynamics is to describe the change. There are two types of change in QFT:
1) Changes of probabilities between detections. Those are described by local deterministic laws.
2) Clicks of detectors. Those are described by non-deterministic laws.

Clearly, 2) does not share all the properties of 1), because 1) is deterministic and 2) is not. So, given that 2) is so fundamentally different from 1), what makes you think that 2) must be local?
 
  • #108
Now it's totally confusing. All that QT gives me is, given the initial preparation of the state, the probabilities for finding a certain value of any observable of the system. The click of a detector is such a measurement (e.g., for the presence of a particle in the detector). Of course, I don't know more than the probability for it to click. The click is due to interactions of the particle with the particles in the detector, governed by the same laws, so it's local.
 
  • #109
vanhees71 said:
The click is due to interactions of the particle with the particles in the detector, governed by the same laws, so it's local.
If it's due to the same laws, how can it be that one of them is deterministic and the other non-deterministic?

Or do you deny that unitary evolution of probability between measurements is deterministic?

Or do you claim that even between measurements something changes in a non-deterministic way? If so, what is it?
 
  • #110
I don't understand what you mean by deterministic then. The state implies only probabilities. So there's no deterministic content in it (except for the case of a precisely determined value of an observable). The (ideal) detector clicks with the probability given by the state according to Born's rule.
 
  • #111
vanhees71 said:
I don't understand what you mean by deterministic then. The state implies only probabilities. So there's no deterministic content in it (except for the case of a precisely determined value of an observable). The (ideal) detector clicks with the probability given by the state according to Born's rule.
By unitary evolution, if you know the probability ##P(t)## for some ##t##, then you can calculate the probability ##P(t+\Delta t)##. That's deterministic evolution of probability. Between the clicks, probability evolves deterministically. At the moment of a click, it is not so obvious whether it does or not.
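To make that explicit in the simplest case (a pure state ##|\psi(t)\rangle## and a time-independent Hamiltonian ##H##, assumed just for concreteness): the state at ##t## fixes the state, and hence every Born-rule probability, at ##t+\Delta t##,
$$|\psi(t+\Delta t)\rangle = e^{-iH\Delta t/\hbar}\,|\psi(t)\rangle, \qquad P_a(t+\Delta t)=|\langle a|\psi(t+\Delta t)\rangle|^2,$$
with nothing random entering between the clicks.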

Nevertheless, try to answer my last question in the post above:
Do you claim that even between measurements something changes in a non-deterministic way? If so, what is it?
 
Last edited:
  • #112
Of course between the clicks you have unitary time evolution. It's misleading to call it "deterministic", but I know what you mean. I don't know what you mean by the question whether the click is deterministic or not. It's the measurement, and about the measurement I only know probabilities. That's the whole point of saying QT is a probabilistic description, and what's probabilistic is not determined (except when the probability for the outcome of the measured observable is 100%).
 
  • #113
vanhees71 said:
Of course between the clicks you have unitary time evolution.
And how about the clicks themselves? Are they unitary too?

vanhees71 said:
It's the measurement, and about the measurement I only know probabilities.
And how about non-measurements? Can you say anything about probabilities of non-measured quantities?
 
  • #114
What do you mean by "clicks are unitary" or "clicks are not unitary"?

As a physicist I don't need to talk about non-measured quantities since if I don't measure them, what should I be able to say about them?
 
  • #115
vanhees71 said:
What do you mean by "clicks are unitary" or "clicks are not unitary"?
You said: "Of course between the clicks you have unitary time evolution."
In the same sense you meant that, I ask you: Do I also have unitary time evolution at the time of clicks?

vanhees71 said:
As a physicist I don't need to talk about non-measured quantities since if I don't measure them, what should I be able to say about them?
If so, then why do you keep saying that there is a conserved charge in the absence of measurement? Why don't you say that you don't need to talk about conserved charge in the absence of measurement? It looks as if you use double standards.
 
  • #116
Demystifier said:
the meter as a solid object is stable.
Are you referring only to classical theory? Because this doesn't seem to be a valid assertion in the quantum realm, at least if we go by its theoretical principles. A solid meter is most likely made up of atoms joined by chemical bonds that act as springs, with a ground-state energy that fluctuates. The corresponding uncertainty in the length of each "spring" makes the separation between atoms at each step not well defined, so the separations shouldn't add up to a fixed and stable expected distance between the marks on the meter, and therefore it can't justify a robust measure remaining stable independently of how and when it is used as a measuring tool.

Of course, in practice these shortcomings are overcome by obtaining a measurement that gives a definite distance, which allows one to introduce an idealized meter, and the atomic fluctuations only produce a minor blurring of the position of each atom (for instance in x-ray scattering).
Demystifier said:
This demonstrates that stability and energy conservation are not directly related to each other.
You would have to show from first principles how the meter is stable taking into account the ground-state energy fluctuations.
Demystifier said:
More formally, consider a particle in a potential of the form
$$V(x,t)=\frac{kx^2}{2}+U(t)$$
where ##U(t)## is a positive non-constant function of time ##t##. Clearly this potential does not conserve energy. Yet, the particle position ##x=0## is stable, provided that ##k## is positive. If ##k## were negative the position ##x=0## would not be stable, even if ##U(t)## were zero. That's another demonstration that stability and energy conservation are not related.
See above.
 
  • #117
Demystifier said:
You said: "Of course between the clicks you have unitary time evolution."
In the same sense you meant that, I ask you: Do I also have unitary time evolution at the time of clicks? If so, then why do you keep saying that there is a conserved charge in the absence of measurement? Why don't you say that you don't need to talk about conserved charge in the absence of measurement? It looks as if you use double standards.
Sure, but you cannot evaluate it in practice since the detector is a macroscopic device. All you are interested in is a macroscopic, very coarse-grained observable (in this case simply "click" or "no click").

The argument with the conserved charge was addressed to the question of why a particle is there with certainty. As I argued, that's of course the case only for stable particles, and it is usually conservation laws that forbid their decay. If the particle is unstable, of course it decays with some probability and you cannot say with certainty whether it's still there, but only give the probability of its survival. I don't know why all of this is a "measurement problem". If you just stick to the minimal interpretation, there's never a contradiction. QT seems to be a pretty logically consistent probabilistic description of nature. It's also very successful, i.e., it's tested very well against observations, and the loopholes concerning the possibility of some local deterministic description are more and more being closed too. So if you want to get back to a deterministic theory, you'd have to invent something non-local, and that seems to be very difficult, because so far nobody has come up with a convincing model. Maybe Bohmian mechanics is the most convincing, but on the other hand there seem to be predictions of "trajectories" that cannot be verified by experiment.
 
  • #118
vanhees71 said:
So if you want to get back to a deterministic theory, you'd have to invent something non-local
My main objection concerns the claim above. If by "deterministic" you mean the opposite of probabilistic, then, I claim, even without determinism you need something non-local. That's what I am repeatedly trying to explain to you in various ways, and that's what even Ballentine in his book explains in his own way. But somehow you fail to grasp any argument in that direction, because you always end up with an argument of the form: "The QFT dynamics is local" (which is true) "and hence we don't need anything non-local" (which is at least doubtful).
 
  • #119
This I don't understand. Classical electrodynamics (with a classical continuum description of the charged matter) is a local deterministic theory par excellence. Why, in your opinion, do I need to get non-local even in the non-quantum context?

The other argument is related to standard relativistic QFT, which is indeed local and probabilistic. So far we don't need anything non-local, because QFT (even the Standard Model) is very successful in describing all observed facts.

Here we discuss something else, namely possible theories going beyond standard Q(F)T, maybe deterministic ones. In the latter case, imho it's pretty clear that we'd need a non-local formulation if you want to have a deterministic theory that can describe what's described by entanglement in QFT.
 
  • #120
vanhees71 said:
Why, in your opinion, do I need to get non-local even in the non-quantum context?
I didn't say that we need it in non-quantum context. Why do you think I did?

vanhees71 said:
So far we don't need anything non-local, because QFT...
And I disagree. I claim that even QFT has something implicitly non-local in it. But I cannot explain it to you without repeating my arguments, which you failed to grasp.
 
  • #121
Demystifier said:
But I cannot explain it to you without repeating my arguments, which you failed to grasp.

Lurking, with a suggestion: Since you have mentioned Ballentine explaining this "in his own way," perhaps you could give this cite with enough detail to be looked up? That might provide a way around the explanatory deadlock.
 
  • #122
UsableThought said:
Lurking, with a suggestion: Since you have mentioned Ballentine explaining this "in his own way," perhaps you could give this cite with enough detail to be looked up? That might provide a way around the explanatory deadlock.
Well, I already did it for @vanhees71 in an older thread.
Would studying MWI be a waste of time?
 
  • #123
I have this weird feeling that although QFT is just QM applied to classical field theories, in practice we do different things in QFT and QM. It's true that calculating the probability amplitudes for a particular kind of scattering using perturbation theory is something that can be done in both QFT and QM, but when we talk about foundational problems in QM, it's not scattering experiments that we're thinking about. In such discussions we tend to think about problems that allow us to think about the state of the system as a whole while at the same time providing us with a clear way to recognize the system as consisting of subsystems that can be objectively identified. I just have this feeling that there is not much of this nature in the problems that we usually deal with in QFT, especially because we're always dealing with perturbation series and Feynman diagrams that are somehow just an incomplete and small part of the solution (yes, numerically they may be good approximations, but conceptually they're in no way close to a clear picture of what's going on).
It may stem from my lack of knowledge, but it seems to me that it's infinitely harder to talk about the foundational problems of QM in the context of QFT. So it just doesn't make sense to me that someone gives the same explanations to dismiss those foundational problems in both theories. It seems to me this is what @vanhees71 is doing. I'm just getting more and more convinced that he dismisses these problems because he's in the group of physicists who are happy with the fact that they can apply QM to their problems and get accurate enough results (not that there is anything wrong with this approach).
Sorry if I'm just rambling, but I kind of think I have something in my mind and I'm not so sure what it is.
 
  • #124
I think that's the usual confusion between "local interactions" and "long-ranged correlations". The latter are included in rel. QFT in terms of entanglement, which can correlate far-distant parts of a quantum system. With "local" I always refer to the properties that the QFT should be microcausal and that the Lagrangian should be a polynomial in the fields and their derivatives at one space-time point (as implemented in the Standard Model).
 
  • #125
ShayanJ said:
I have this weird feeling that although QFT is just QM applied to classical field theories, in practice we do different things in QFT and QM. It's true that calculating the probability amplitudes for a particular kind of scattering using perturbation theory is something that can be done in both QFT and QM, but when we talk about foundational problems in QM, it's not scattering experiments that we're thinking about. In such discussions we tend to think about problems that allow us to think about the state of the system as a whole while at the same time providing us with a clear way to recognize the system as consisting of subsystems that can be objectively identified. I just have this feeling that there is not much of this nature in the problems that we usually deal with in QFT, especially because we're always dealing with perturbation series and Feynman diagrams that are somehow just an incomplete and small part of the solution (yes, numerically they may be good approximations, but conceptually they're in no way close to a clear picture of what's going on).
It may stem from my lack of knowledge, but it seems to me that it's infinitely harder to talk about the foundational problems of QM in the context of QFT. So it just doesn't make sense to me that someone gives the same explanations to dismiss those foundational problems in both theories. It seems to me this is what @vanhees71 is doing. I'm just getting more and more convinced that he dismisses these problems because he's in the group of physicists who are happy with the fact that they can apply QM to their problems and get accurate enough results (not that there is anything wrong with this approach).
Sorry if I'm just rambling, but I kind of think I have something in my mind and I'm not so sure what it is.
Well, yes, that's exactly my point of view. I don't see the need for a deterministic theory or any other theory as long as there's no empirical evidence against the theories we have now. I'm very pragmatic in seeing no fundamental problems like "measurement problems", as long as we can use QFT to get all observed facts described by it. A measurement is defined by a real-world measurement apparatus, and there's nothing hinting that one needs more than statistical quantum physics to understand the macroscopic behavior of these measurement apparati in terms of the underlying microphysics. The so-called measurement problem is a quibble of philosophers who have not learned to abandon the classical thinking that's often called "common sense"; but that is just experience of macroscopic matter in everyday life, and as far as the fundamental theories are concerned, its classical behavior is well compatible with the underlying quantum dynamics thanks to the coarse-grained nature of macroscopic observables.

Of course, there's the great enigma about a consistent quantization of gravity, and maybe this problem needs an extension of our "foundational toolbox".
 
  • #126
vanhees71 said:
I'm very pragmatic in seeing no fundamental problems like "measurement problems", as long as we can use QFT to get all observed facts described by it.
I find this thread interesting, so I hope that I understand everyone's position correctly. Yours seems pretty straightforward except:
vanhees71 said:
A measurement is defined by a real-world measurement apparatus,
The singular here kind of undermines your previous statement, in my view. Because QFT cannot even give any hint about an observed fact. Only a series of facts (whose count is quite fuzzy) will do, already turning the experiment into a "macro" experiment, whatever the apparatus is made of.

vanhees71 said:
and there's nothing hinting that one needs more than statistical quantum physics to understand the macroscopic behavior of these measurement apparati in terms of the underlying microphysics.
But isn't there the obvious hint of entanglement? Doesn't it prove that there are some single facts (like the correlation at some angle) that can be absolutely known/predicted non-statistically?

Isn't the point of Demystifier to show that there are other hints, like the way the two spatially separated ends of a Stern-Gerlach experiment somehow know how to always individually "click" in opposite ways (without entanglement needed)?
 
  • Like
Likes entropy1
  • #127
From many discussions of interpretative problems with @vanhees71, I would say that his interpretation is some mixture of the shut-up-and-calculate interpretation, the minimal statistical ensemble interpretation, and the instrumental interpretation. Each of these interpretations by itself is legitimate and logically consistent. But none of these interpretations is perfect, so people naturally try to mix different interpretations, hoping that this will somehow remedy or alleviate the deficiencies of the individual interpretations. Unfortunately, mixing often does more harm than good. By mixing different interpretations one easily falls into logical and conceptual inconsistencies, and some people (like me) have not much tolerance for such inconsistencies. Other people don't care much about such inconsistencies (after all, that's mere philosophy), as long as it does not affect their computations of actually measurable quantities.

People who care about something should not talk about it with people who don't care. I should not discuss quantum interpretations with @vanhees71, but somehow I always fall into the same trap. I always think like this: (1) he is smart and (2) he likes to talk about interpretations, so therefore (3) it must be fruitful to discuss interpretations with him. But that's wrong, (3) does not follow from (1) and (2).
 
  • Like
Likes vanhees71 and UsableThought
  • #128
Demystifier said:
People who care about something should not talk about it with people who don't care.

Absolutely true - hard-won, late-life wisdom, in my case. And also true that most of us who care violate this rule and then regret it, over and over. The illusion of potential fruitfulness that you mention as point #3 persists.

Oddly enough a similar though not identical dilemma applies to real-world negotiations, where two persons (friends or colleagues or partners or spouses or what have you) must divide responsibilities, and as it happens one person cares much more about a particular responsibility than the other. Take for example a married couple, A and B: A cares very much about keeping a clean kitchen, but B is a slob and doesn't care at all. A attempts to put moral pressure on B to agree to do more cleaning up. B nominally agrees so as to keep the peace; but because B really does not care at all, B still doesn't do enough cleaning to satisfy A. Unfortunately, because A really cannot tolerate a dirty kitchen, he/she will then pick up the slack, resenting it every time but powerless to do anything else.

The only way this situation can change is if A possesses superior leverage of another sort and is willing to bring it to bear, even at the risk of alienating B; this superior leverage is called a BATNA, or "best alternative to a negotiated agreement." I learned about BATNAs many decades ago from a book written by a college friend of mine: Kidding Ourselves: Breadwinning, Babies, And Bargaining Power, by Rhona Mahoney. You can read more about BATNAs in a Wikipedia article, link here. A BATNA can be summed up as a feasible alternative to remaining in a relationship. Whoever has the best BATNA in theory has more leverage - but only if they are ruthless enough to apply it and risk alienating the other person.

But here on PF of course no one has a BATNA. So the only cure for caring too much is to sigh when this is recognized and once again leave off.
 
Last edited:
  • Like
Likes Demystifier
  • #129
Demystifier said:
People who care about something should not talk about it with people who don't care.
Good motto. So just to be sure, do you care about your statement "the meter as a solid object is stable" in quantum physics as it pertains to the measurement problem?
 
  • Like
Likes Demystifier
  • #130
Yes, indeed. I also always fall into the same trap, thinking it might help to keep the discussion away from philosophy and stay closer to physics ;-).
 
  • Like
Likes Demystifier
  • #131
Demystifier said:
People who care about something should not talk about it with people who don't care.
As long as both parties are willing to discuss, I see no harm in that. Planting a seed doesn't guarantee a result. But in the long run, even if fruitless, it may help to guess whether the seed is wrong, or the soil, or the weather. As a spectator I find all the posts quite interesting! I am sure I am not the only one...

To get back to physics, I have a hard time understanding where interpretations come into play here. Maybe if the Q in QFT or QM stood for Quanta (plural) instead of Quantum (singular), we wouldn't have any grey area to refine.

Do you have some link on the "instrumental interpretation", so I can see whether it sheds light on the "problem" that every experiment seems to deal with a single event while the theory deals with the statistics of a series of such experiments?
 
  • #132
RockyMarciano said:
Good motto. So just to be sure, do you care about your statement "the meter as a solid object is stable" in quantum physics as it pertains to the measurement problem?
No, I don't care much about that because it looks kind of trivial to me (but I could be wrong). Sorry! :biggrin:
 
  • #133
Hm, as a pragmatist I'd say that the stability of matter, including "the meter as a solid object" (I guess you are referring to the original metre prototype), is among the prime arguments for the validity of QT, given the atomistic structure of matter. There's no known way to explain this stability within classical physics. Contrary to the so-called "measurement problem", which is a pseudoproblem since QT's predictions are regularly verified in the lab with astonishing accuracy, the problem of the stability of matter is a highly non-trivial one, and it is solved by QT!
 
  • #134
vanhees71 said:
Hm, as a pragmatist I'd say that the stability of matter, including "the meter as a solid object" (I guess you are referring to the original metre prototype), is among the prime arguments for the validity of QT, given the atomistic structure of matter. There's no known way to explain this stability within classical physics. Contrary to the so-called "measurement problem", which is a pseudoproblem since QT's predictions are regularly verified in the lab with astonishing accuracy, the problem of the stability of matter is a highly non-trivial one, and it is solved by QT!
How does QT solve it? I'm not sure what you are referring to here. Would you take a look at my post #116 and tell me how this is addressed from first principles by QT?
EDIT: I think you are confusing my question with the classical problem of the stability of matter (why electrons don't fall into the nuclei and similar stuff); my post was not about that but rather about the reliability of a solid rod as a measuring tool according to quantum mechanics.
 
Last edited:
  • #135
vanhees71 said:
... Contrary to the so-called "measurement problem", which is a pseudoproblem since QT's predictions are regularly verified in the lab with astonishing accuracy...

I agree with a great deal of what you are saying. However, when reading “Why Decoherence has not Solved the Measurement Problem: A Response to P.W. Anderson” by Stephen L. Adler, I do not think that the “measurement problem” is a “pseudoproblem”.
 
  • #136
Demystifier said:
No, I don't care much about that because it looks kind of trivial to me (but I could be wrong). Sorry! :biggrin:
Fair enough. However more often than we think the key to a problem lies in reconsidering what seems trivial.
 
  • Like
Likes Demystifier
  • #137
Lord Jestocost said:
I agree with a great deal of what you are saying. However, when reading “Why Decoherence has not Solved the Measurement Problem: A Response to P.W. Anderson” by Stephen L. Adler, I do not think that the “measurement problem” is a “pseudoproblem”.
The difference between problems and pseudo-problems is subjective. Any problem can be turned into a pseudo-problem by a change of perspective. Anyone is welcome to challenge me with a "serious" problem that I will try to turn into a pseudo-problem by a change of perspective, just to prove my point.
 
  • #138
RockyMarciano said:
Fair enough. However more often than we think the key to a problem lies in reconsidering what seems trivial.
That's certainly true. But then again, no single individual can reconsider all problems that seem trivial, so one has to be picky.
 
  • Like
Likes Boing3000 and RockyMarciano
  • #139
Demystifier said:
That's certainly true. But then again, no single individual can reconsider all problems that seem trivial, so one has to be picky.
Maybe you wouldn't mind giving the trivial answer to the objections to a robust solid meter in QM given in #116, then?
 
  • #140
@Demystifier , I have the feeling that, at least in part if not fully, the problem you have is because of overuse of the term "non-local". Have you tried to explain it by giving the exact same argument but using a different word and never mentioning "non-local"?
 
  • Like
Likes Demystifier
  • #141
RockyMarciano said:
Are you referring only to classical theory? Because this doesn't seem to be a valid assertion in the quantum realm, at least if we go by its theoretical principles. A solid meter is most likely made up of atoms joined by chemical bonds that act as springs, with a ground-state energy that fluctuates. The corresponding uncertainty in the length of each "spring" makes the separation between atoms at each step not well defined, so the separations shouldn't add up to a fixed and stable expected distance between the marks on the meter, and therefore it can't justify a robust measure remaining stable independently of how and when it is used as a measuring tool.

Of course, in practice these shortcomings are overcome by obtaining a measurement that gives a definite distance, which allows one to introduce an idealized meter, and the atomic fluctuations only produce a minor blurring of the position of each atom (for instance in x-ray scattering). You would have to show from first principles how the meter is stable taking into account the ground-state energy fluctuations.
Fluctuations are not necessarily a threat to stability. Stability does not mean that fluctuations do not exist. Stability means that initially small fluctuations remain small. Indeed, if you write down a periodic wave function for a crystal lattice (which can be found in all solid state textbooks) you will see that the position uncertainties of the atoms are much smaller than the size of the crystal as a whole. This means that quantum fluctuations are small, which corresponds to stability.
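To get a feeling for the numbers, take an atom of mass ##M\approx 10^{-25}## kg sitting in an effectively harmonic well with a typical phonon frequency ##\omega\approx 10^{13}\,\rm s^{-1}## (order-of-magnitude values for illustration only). Its ground-state position spread is roughly
$$\Delta x \sim \sqrt{\frac{\hbar}{2M\omega}} \approx \sqrt{\frac{10^{-34}\,{\rm J\,s}}{2\times 10^{-25}\,{\rm kg}\times 10^{13}\,{\rm s^{-1}}}} \approx 10^{-11}\,{\rm m},$$
only a few percent of a typical lattice spacing of about ##3\times 10^{-10}## m, and it does not grow with the size of the crystal.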
 
  • Like
Likes RockyMarciano
  • #142
martinbn said:
@Demystifier , I have the feeling that, at least in part if not fully, the problem you have is because of overuse of the term "non-local". Have you tried to explain it by giving the exact same argument but using a different word and never mentioning "non-local"?
The term "non-local" is standard, but sometimes the term "non-separable" is used instead. Indeed, some physicists hold that QM is non-separable but not non-local. That makes sense because non-locality and non-separability are not the same. In fact, non-separability is a purely technical (i.e. mathematical) concept and there is nothing controversial about the fact that QM is non-separable. But if I was talking only about non-separability, that would mean that I only talk about the mathematical structure of QM and not about its meaning. That would not be satisfying because it is precisely the meaning that I want to talk about.

Perhaps one should invent a new term, different from both non-locality and non-separability? Perhaps! But on the other hand, it could create even more confusion.
 
  • #143
Demystifier said:
Fluctuations are not necessarily a threat to stability. Stability does not mean that fluctuations do not exist. Stability means that initially small fluctuations remain small. Indeed, if you write down a periodic wave function for a crystal lattice (which can be found in all solid state textbooks) you will see that the position uncertainties of the atoms are much smaller than the size of the crystal as a whole. This means that quantum fluctuations are small, which corresponds to stability.
Yes, I referred to this when I wrote about how this is dealt with in practice. But my question was: how are these fluctuations kept small? Why don't the errors at each atom add up? How are they prevented from accumulating dynamically into a big final error in the absence of conservation of any quantity?
Of course, for a periodic wave function for a crystal lattice with a particular pattern it is trivial, but I thought you were claiming that measurements are intrinsically aperiodic and without a conserved pattern, and in any case it should not depend on the size of a particular crystal.
 
Last edited:
  • #144
RockyMarciano said:
why don't the errors at each atom add up
Even when errors do add up, it is typical for most statistical systems that errors add up as ##\sqrt{N}##, which is small relative to the number ##N## of the constituents when ##N## is big. Intuitively, the reason why errors do not add up as ##N## is the fact that different errors can also cancel each other.
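A back-of-the-envelope illustration (with made-up but plausible numbers): suppose a 1 m rod has ##N\approx 3\times 10^{9}## atoms along its length and each interatomic spacing carried an independent error of ##\sigma\approx 10^{-11}## m. The total length error would then be
$$\sigma_{\rm tot}=\sqrt{N}\,\sigma\approx\sqrt{3\times 10^{9}}\times 10^{-11}\,{\rm m}\approx 5\times 10^{-7}\,{\rm m},$$
i.e. below a micrometre, whereas errors adding linearly as ##N\sigma## would give about 3 cm.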
 
  • #145
Demystifier said:
Even when errors do add up, it is typical for most statistical systems that errors add up as ##\sqrt{N}##, which is small relative to the number ##N## of the constituents when ##N## is big. Intuitively, the reason why errors do not add up as ##N## is the fact that different errors can also cancel each other.

Hmm, but the error remains the same for any ##N## particles and for any size of the crystal, while ##\sqrt{N}## grows as ##N## gets larger. So for statistical systems it may be that the error adds up more slowly than ##N##, but for quantum objects it doesn't add up at all. Each atom seems to be aware of the location it ought to be at if it were a classical particle, in the absence of uncertainty, and it only experiences its quantum uncertainty in relation to this position. It also seems to be aware of the size of the lattice, adjusting its uncertainty to it.
 
  • #146
RockyMarciano said:
Hmm, but the error remains the same for any ##N## particles and for any size of the crystal, while ##\sqrt{N}## grows as ##N## gets larger. So for statistical systems it may be that the error adds up more slowly than ##N##, but for quantum objects it doesn't add up at all. Each atom seems to be aware of the location it ought to be at if it were a classical particle, in the absence of uncertainty, and it only experiences its quantum uncertainty in relation to this position. It also seems to be aware of the size of the lattice, adjusting its uncertainty to it.
Indeed, in this case the errors do not add up at all. That's because the atoms are not mutually independent: they interact with each other through attractive interactions. The only free quantities are the position ##x## and momentum ##p## of the macroscopic body as a whole, and the uncertainties of those satisfy
$$\Delta x \Delta p \sim \hbar$$
which does not depend on ##N##.
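To see how negligible this is for a macroscopic body, take ##M=1## kg (an illustrative value) and suppose its centre-of-mass position is pinned down to ##\Delta x = 10^{-17}## m; then
$$\Delta v=\frac{\Delta p}{M}\sim\frac{\hbar}{M\,\Delta x}\approx\frac{10^{-34}\,{\rm J\,s}}{1\,{\rm kg}\times 10^{-17}\,{\rm m}}=10^{-17}\,{\rm m/s},$$
far below anything observable, and independent of the number ##N## of atoms the body contains.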
 
  • #147
Demystifier said:
The term "non-local" is standard, but sometimes the term "non-separable" is used instead. Indeed, some physicists hold that QM is non-separable but not non-local. That makes sense because non-locality and non-separability are not the same. In fact, non-separability is a purely technical (i.e. mathematical) concept and there is nothing controversial about the fact that QM is non-separable. But if I was talking only about non-separability, that would mean that I only talk about the mathematical structure of QM and not about its meaning. That would not be satisfying because it is precisely the meaning that I want to talk about.

Perhaps one should invent a new term, different from both non-locality and non-separability? Perhaps! But on the other hand, it could create even more confusion.
It's very simple: remember that the Standard Model of particle physics is a local relativistic QFT, and it's clear what local means here. It's (a) microcausality and (b) that the Lagrangian, and thus the Hamiltonian, is written as a polynomial in local field operators (i.e., operators transforming under the Poincare group like the analogous classical fields) and their (first) spacetime derivatives at one spacetime point.

Of course, like any QT, relativistic local QFT also admits long-range correlations described by entangled states. Einstein called this "non-separability" in a single-author paper related to the unfortunately more famous EPR paper, which in fact he didn't like, precisely because of this point: his quibble was more about non-separability than anything else. But on this quibble he was wrong, since nowadays it has indeed been demonstrated with high precision that the strong correlations encoded in entangled states are what's observed.
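(For definiteness, the microcausality condition in (a) is the standard requirement that local observables commute at spacelike separation; with the signature convention ##(+,-,-,-)##,
$$[\hat{O}_1(x),\hat{O}_2(y)]=0 \quad \text{for } (x-y)^2<0,$$
so that no local measurement can influence another one outside its light cone at the level of the dynamics.)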
 
  • #148
vanhees71 said:
It's very simple: remember that the Standard Model of particle physics is a local relativistic QFT, and it's clear what local means here. It's (a) microcausality and (b) that the Lagrangian, and thus the Hamiltonian, is written as a polynomial in local field operators (i.e., operators transforming under the Poincare group like the analogous classical fields) and their (first) spacetime derivatives at one spacetime point.

Of course, like any QT, relativistic local QFT also admits long-range correlations described by entangled states. Einstein called this "non-separability" in a single-author paper related to the unfortunately more famous EPR paper, which in fact he didn't like, precisely because of this point: his quibble was more about non-separability than anything else. But on this quibble he was wrong, since nowadays it has indeed been demonstrated with high precision that the strong correlations encoded in entangled states are what's observed.
Suppose that we are in 1920, when theoretical physicists are equipped only with the concepts of classical physics, plus relativity, plus "old" Bohr-Sommerfeld-like QM. They don't have modern quantum mechanics, they don't have quantum field theory and they don't have wave functions. And suppose that some lucky experimentalist observes "quantum" correlations by accident, but neither he nor anybody else knows about their quantum-theoretical origin. In your opinion, how would physicists of that time interpret such correlations? Do you think they would conclude that there is some non-local mechanism involved? Or do you think that a different interpretation would look more natural? Do you think that some smart guy could reproduce the laws of modern QM just from this experiment (without Heisenberg's and Schrodinger's insights that are about to appear 5 years later)?
 
Last edited:
  • #149
Demystifier said:
Indeed, in this case the errors do not add up at all. That's because the atoms are not mutually independent: they interact with each other through attractive interactions. The only free quantities are the position ##x## and momentum ##p## of the macroscopic body as a whole, and the uncertainties of those satisfy
$$\Delta x \Delta p \sim \hbar$$
which does not depend on ##N##.
So your answer to my question is interactions. That much we know; how about elaborating on how quantum interactions (which are themselves subject to the HUP) produce macroscopic objects that as a whole satisfy ##\Delta x \Delta p \sim \hbar##? This is, after all, the core of the "measurement problem".
 
  • #150
RockyMarciano said:
how about elaborating on how quantum interactions (which are themselves subject to the HUP) produce macroscopic objects that as a whole satisfy ##\Delta x \Delta p \sim \hbar##? This is, after all, the core of the "measurement problem".
If you ask me why macro objects look classical, probably the best answer is decoherence. See e.g. the book by Schlosshauer
https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20

If you think that decoherence cannot be the full answer, then try a Bohmian completion:
https://arxiv.org/abs/quant-ph/0112005
 
