Quantum incompleteness?

It is in those correspondence rules that the problem arises, and depending on how one interprets the Born rule, for instance, you might have a smaller or bigger problem.
I can't follow you there. I know Ballentine pretty well, and what he shows is that the dynamics follows from the invariance of probabilities.

I would say the Born rule was devised having the particle picture of classical mechanics in mind. Don't you?
What do you mean by devised? Historically - probably - but so? We now know it follows from much more general considerations having nothing to do with particles, e.g. Gleason's theorem.

Thanks
Bill
 

stevendaryl

Well, if you reduce the discussion to abstract observables without attributing them to any particular object, be it a particle, a field or whatever, you don't have a way to connect it with the physical/pragmatic side - so no measurement problem for you. But, as Ballentine warned in the quote posted by devil, you don't really have a physical theory, just a set of abstract axioms with no connection to experiment.
I think that's what I intended to say: that the "measurement problem" is about the connection between the notion of "observable" that is a primitive in the quantum theory, and the "observable" that is something that requires a measurement apparatus and a measurement procedure. But I don't see how that supports the claim that the measurement problem has anything to do, intrinsically, with classical properties of particles.

I would say the Born rule was devised having the particle picture of classical mechanics in mind. Don't you?
No, I don't see much of a connection between the two. The Born rule about probabilities, or something like it, is forced on us by the assumption, or empirical fact, that an observation always produces an eigenvalue of the corresponding operator, and that operators don't commute (so it's not possible for all observables to have definite values simultaneously). I don't see that there is anything particularly particle-like about any of this.
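A two-operator sketch of that point (a minimal illustration of my own; the Pauli matrices stand in for generic non-commuting observables):

```python
import numpy as np

# Two generic non-commuting observables, with Pauli matrices standing in
Z = np.diag([1, -1])
X = np.array([[0, 1], [1, 0]])

print(np.linalg.eigvalsh(Z))   # [-1. 1.]: an observation of Z can only yield an eigenvalue
print(X @ Z - Z @ X)           # nonzero commutator: X and Z share no eigenbasis,
                               # so they cannot both have definite values at once
```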

When you were talking about observables were you thinking about them in terms of properties of particles, fields...?
They are properties of a system, as a whole. The electric and magnetic field at a point in space is an observable. The mass, position, momentum, magnetic moment of a lump of iron are all observables. Yes, those observables are all macroscopic sums of observables associated with individual atoms of the iron, but the observables don't require particles to make sense of them. So I really don't understand the point you are making about the relationship between observables and particles.
 

Jano L.

No. Kolmogorov's axioms are clear on this point:
http://en.wikipedia.org/wiki/Probability_axioms
'This is the assumption of unit measure: that the probability that some elementary event in the entire sample space will occur is 1. More specifically, there are no elementary events outside the sample space.'

If something has probability 1 it must occur.
I think it will be easier to explain this by example. Consider a random process that produces a series of red points somewhere in a unit disk with uniform probability density. The probability of the event that the next point will coincide with any given point A of the disk is equal to 0.

However, after the event occurs, some point of the disk will be red. At that instant, an event with probability 0 has happened.

Actually, all events that happen in such a random process are events that have probability 0.

So "event has probability 0" does not mean "impossible event".

Similarly, "probability 1" does not mean "certain event". Consider probability that the red point will land at point with both coordinates irrational. This can be shown to be equal to 1 in standard measure theory. However, there is still infinity of points that have rational coordinates, and these can happen - they are part of the disk.

In the language of abstract theory, all this is just a manifestation of the fact that equal measures do not imply that the sets are equal.
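For anyone who wants to play with this, here is a minimal Python sketch of the disk example (the target point, sample size and test radius are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_disk(n):
    """Rejection-sample n points uniformly from the unit disk."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(-1, 1, size=2)
        if p @ p <= 1.0:
            pts.append(p)
    return np.array(pts)

A = np.array([0.3, 0.4])   # an arbitrary target point inside the disk
points = sample_disk(100_000)

# The event "the next point coincides exactly with A" has probability 0,
# yet every point actually drawn is itself such a probability-0 event.
print(np.all(points == A, axis=1).sum())   # 0: exact coincidence never observed

# Hitting a small disk of radius r around A has probability r^2 (ratio of areas),
# which the sample frequency approximates:
r = 0.05
print((np.linalg.norm(points - A, axis=1) <= r).mean(), r**2)

# Note that the "both coordinates irrational" event cannot even be simulated:
# every floating-point number is rational.
```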
 

Jano L.

I would say the Born rule was devised having the particle picture of classical mechanics in mind. Don't you?
Very good point. As far as I know, there are actually two Born rules, although people tend to think they are the same.

The first rule, which works well in scattering and quantum chemistry, is the assumption that

$$
|\psi(\mathbf r)|^2 ~\Delta V,
$$

gives the probability that the particle is in the small volume element ##\Delta V## around ##\mathbf r##.

This really refers to particles and their configuration.

The second rule, I think proposed after the first one, is that

$$
p_k = |\langle \phi_k|\psi \rangle|^2
$$

gives the probability that the system in state ##\psi## will manifest energy ##E_k## (or get into state ##\phi_k##, in other versions) when a "measurement of energy" is performed (or even spontaneously, due to interaction with the environment, in other versions). This is more abstract and does not require particles.

We should really distinguish these two rules. The first one is easy, does not depend on the measurement problem, and is gauge-invariant.

The second is difficult to understand, because it is connected to measurements and is gauge-dependent - if we choose a different gauge to calculate ##\psi##, we get different ##p_k##.
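A small numerical sketch of the two rules side by side (a toy setup of my own - a Gaussian packet on a grid for the first rule, a random 3-level Hamiltonian for the second - not anything from Born's paper):

```python
import numpy as np

# --- Rule 1: |psi(r)|^2 * dV as a configuration probability ---
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) / np.pi**0.25        # normalized Gaussian wavefunction
print(np.sum(np.abs(psi)**2) * dx)           # ~1.0: cell probabilities sum to 1
print(np.abs(psi[2000])**2 * dx)             # probability of the cell around x = 0

# --- Rule 2: p_k = |<phi_k|psi>|^2 for a "measurement of energy" ---
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (M + M.conj().T) / 2                     # a random Hermitian "Hamiltonian"
E, phi = np.linalg.eigh(H)                   # energies E_k, eigenstates phi_k (columns)
state = np.array([1, 1j, 0]) / np.sqrt(2)    # some normalized state psi
p = np.abs(phi.conj().T @ state)**2          # p_k = |<phi_k|psi>|^2
print(p, p.sum())                            # one probability per E_k, summing to 1
```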
 
No. Kolmogorov's axioms are clear on this point:
http://en.wikipedia.org/wiki/Probability_axioms
'This is the assumption of unit measure: that the probability that some elementary event in the entire sample space will occur is 1. More specifically, there are no elementary events outside the sample space.'

If something has probability 1 it must occur.
It's true that if an event will definitely occur, then it must have probability 1. But it's not the case that if an event has probability 1, it will definitely occur. See this wikipedia page.
 
I think it will be easier to explain this by example. Consider a random process that produces a series of red points somewhere in a unit disk with uniform probability density. The probability of the event that the next point will coincide with any given point A of the disk is equal to 0.

However, after the event occurs, some point of the disk will be red. At that instant, an event with probability 0 has happened.

Actually, all events that happen in such a random process are events that have probability 0.

So "event has probability 0" does not mean "impossible event".

Similarly, "probability 1" does not mean "certain event". Consider probability that the red point will land at point with both coordinates irrational. This can be shown to be equal to 1 in standard measure theory. However, there is still infinity of points that have rational coordinates, and these can happen - they are part of the disk.

In the language of abstract theory, all this is just a manifestation of the fact that equal measures do not imply that the sets are equal.
It's true that if an event will definitely occur, then it must have probability 1. But it's not the case that if an event has probability 1, it will definitely occur. See this wikipedia page.
Good points that simply go to support Jano L.'s posts #71, #74, #78... IMO they show that Bill's reliance on Gleason's theorem is misplaced: Gleason's theorem cannot be used in the general case for what he thinks it can, but only for discretized, lattice models of physical systems - a very strong assumption in light of what we know. At least I think most physicists still favor a continuous picture of nature, as exemplified by successful theories like GR.
What do you mean by devised? Historically - probably - but so?
Not just "probably" - certainly; you just have to read Born's original 1926 paper.


We now know it follows from much more general considerations having nothing to do with particles, e.g. Gleason's theorem.
I wouldn't be so sure we know that. See above.
 

stevendaryl

I think it will be easier to explain this by example. Consider a random process that produces a series of red points somewhere in a unit disk with uniform probability density. The probability of the event that the next point will coincide with any given point A of the disk is equal to 0.

However, after the event occurs, some point of the disk will be red. At that instant, an event with probability 0 has happened.

Actually, all events that happen in such a random process are events that have probability 0.

So "event has probability 0" does not mean "impossible event".
That's certainly true, mathematically. On the other hand, in the real world, we never measure real-valued observables to infinite precision. We never really observe "The particle's momentum is P"; we observe something like "The particle's momentum is somewhere in the range [itex]P \pm \Delta P[/itex]". For this reason, if we have two states [itex]| \psi \rangle [/itex] and [itex]|\phi \rangle[/itex] such that [itex]\langle \psi | \phi \rangle = 1[/itex], they are considered the same state, as far as quantum mechanics is concerned. Adding or subtracting a set of measure zero does nothing.
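A minimal sketch of the "measure zero does nothing" point (my own construction: a Gaussian state zeroed out on a shrinking interval, so the overlap tends to 1 as the altered set shrinks toward measure zero):

```python
import numpy as np
from scipy.integrate import quad

psi = lambda x: np.exp(-x**2 / 2) / np.pi**0.25   # normalized Gaussian, <psi|psi> = 1

def overlap(eps):
    """<psi|phi_eps>, where phi_eps equals psi except on [0, eps], where it is 0."""
    phi = lambda x: 0.0 if 0.0 <= x <= eps else psi(x)
    val, _ = quad(lambda x: psi(x) * phi(x), -10, 10, points=[0.0, eps])
    return val

for eps in [1.0, 0.1, 1e-3, 1e-6]:
    print(eps, overlap(eps))   # overlap -> 1 as the altered interval shrinks
```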
 
Jano L.

That's certainly true, mathematically. On the other hand, in the real world, we never measure real-valued observables to infinite precision. We never really observe "The particle's momentum is P"; we observe something like "The particle's momentum is somewhere in the range P ± ΔP".
Yes, but we were really discussing the theoretical difference between a probabilistic and a deterministic description. I think the limitations of observations have no bearing on that argument.

...if we have two states |ψ⟩ and |ϕ⟩ such that ⟨ψ|ϕ⟩=1, they are considered the same state, as far as quantum mechanics is concerned. Adding or subtracting a set of measure zero does nothing.
It does nothing to the probability. That was my point - in a probabilistic theory, the probability is 1 both for a certain and for an almost certain event. We cannot adequately describe the difference between the two in such a theory. Ergo a deterministic theory is not just a special case of a probabilistic theory. They are different kinds of theories, constructed for different purposes.
 

DevilsAvocado

The electron in orbit was given as an example of a possible path. I have read about this argument before: that a classically orbiting electron should emit radiation, presumably because it is a charged object and an accelerated charged object (changing direction) will emit electromagnetic radiation. Yet if you were to put opposite charges on two spheres, one light and one heavy, and then set the lighter in orbit (in space) around the heavier, would the two spheres not act just as in the two-body problem for the gravitational force?
... but electromagnetism is 10^39 times stronger than gravitation ...

I think the Bohr model is pretty dead: there are incompatibilities with the empirical spectral lines, and it also violates the uncertainty principle. And even if you could magically fix all that - where is your single localized particle in the double-slit experiment?

It doesn’t work...
 

DevilsAvocado

This is in fact the defining property of an inertial frame - the earth isn't exactly inertial but for many practical purposes such as this experiment it is.
I’m sorry bhobba, I’m completely lost... are you saying that the inertial frame of earth has anything to do with the double-slit experiment??
 

DevilsAvocado

QM does not insist on analogies with a classical particle model. All it assumes is that position is an observable - which is a fact.
In QM the symmetries are in the quantum state and the observables - in classical mechanics they are in the Lagrangian.
I don't see that, at all. To me, the "measurement problem" is the conceptual difficulty that on the one hand, a measurement has an abstract role in the axioms of quantum mechanics, as obtaining an eigenvalue of a self-adjoint linear operator, and it has a physical/pragmatic/empirical role in actual experiments as a procedure performed using equipment. What is the relationship between these two notions of measurement? The axioms of quantum mechanics don't make it clear.

I don't see that it has anything particularly to do with particles.
Well, if you reduce the discussion to abstract observables without attributing them to any particular object, be it a particle, a field or whatever, you don't have a way to connect it with the physical/pragmatic side - so no measurement problem for you. But, as Ballentine warned in the quote posted by devil, you don't really have a physical theory, just a set of abstract axioms with no connection to experiment.
I think that's what I intended to say: that the "measurement problem" is about the connection between the notion of "observable" that is a primitive in the quantum theory, and the "observable" that is something that requires a measurement apparatus and a measurement procedure. But I don't see how that supports the claim that the measurement problem has anything to do, intrinsically, with classical properties of particles.
We should really distinguish these two rules. The first one is easy, does not depend on the measurement problem, and is gauge-invariant.

The second is difficult to understand, because it is connected to measurements and is gauge-dependent - if we choose a different gauge to calculate ##\psi##, we get different ##p_k##.

Guys, it’s very interesting to read this discussion, and this stuff is always hard to talk about. Still, let me give you something to chew on while the ‘battle’ continues. :wink:

http://arxiv.org/abs/0707.0401
J.S. Bell's Concept of Local Causality said:
“The beables of the theory are those elements which might correspond to elements of reality, to things which exist. Their existence does not depend on ‘observation’. Indeed observation and observers must be made out of beables.”

Or as he explains elsewhere,

“The concept of ‘observable’ .... is a rather woolly concept. It is not easy to identify precisely which physical processes are to be given the status of ‘observations’ and which are to be relegated to the limbo between one observation and another. So it could be hoped that some increase in precision might be possible by concentration on the beables ... because they are there.”

Bell’s reservations here (about the concept “observable” appearing in the fundamental formulation of allegedly fundamental theories) are closely related to the so-called “measurement problem” of orthodox quantum mechanics, which Bell encapsulated by remarking that the orthodox theory is “unprofessionally vague and ambiguous” in so far as its fundamental dynamics is expressed in terms of “words which, however legitimate and necessary in application, have no place in a formulation with any pretension to physical precision” – such words as “system, apparatus, environment, microscopic, macroscopic, reversible, irreversible, observable, information, measurement.” As Bell elaborates,

“The concepts ‘system’, ‘apparatus’, ‘environment’, immediately imply an artificial division of the world, and an intention to neglect, or take only schematic account of, the interaction across the split. The notions of ‘microscopic’ and ‘macroscopic’ defy precise definition. So also do the notions of ‘reversible’ and ‘irreversible’. Einstein said that it is theory which decides what is ‘observable’. I think he was right – ‘observable’ is a complicated and theory-laden business. Then the notion should not appear in the formulation of fundamental theory.”

As Bell points out, even Bohr (a convenient personification of skepticism regarding the physical reality of unobservable microscopic phenomena) recognizes certain things (for example, the directly perceivable states of a classical measuring apparatus) as unambiguously real, i.e., as beables.

[...]

The unprofessional vagueness and ambiguity of orthodox quantum theory, then, is related to the fact that its formulation presupposes these (classical, macroscopic) beables, but fails to provide clear mathematical laws to describe them. As Bell explains,

“The kinematics of the world, in [the] orthodox picture, is given by a wavefunction ... for the quantum part, and classical variables – variables which have values – for the classical part... [with the classical variables being] somehow macroscopic. This is not spelled out very explicitly. The dynamics is not very precisely formulated either. It includes a Schrödinger equation for the quantum part, and some sort of classical mechanics for the classical part, and ‘collapse’ recipes for their interaction.”

There are thus two related problems. First, the posited ontology is rather different on the two sides of (what Bell calls) “the shifty split” – that is, the division between “the quantum part” and “the classical part.” But then, as a whole, the posited ontology remains unavoidably vague so long as the split remains shifty – i.e., so long as the dividing line between the macroscopic and microscopic remains undefined. And second, the interaction across the split is problematic. Not only is the account of this dynamics (the “collapse” process) inherently bound up in concepts from Bell’s list of dubious terms, but the very existence of a special dynamics for the interaction seems to imply inconsistencies with the dynamics already posited for the two realms separately. As Bell summarizes,

“I think there are professional problems [with quantum mechanics]. That is to say, I’m a professional theoretical physicist and I would like to make a clean theory. And when I look at quantum mechanics I see that it’s a dirty theory. The formulations of quantum mechanics that you find in the books involve dividing the world into an observer and an observed, and you are not told where that division comes... So you have a theory which is fundamentally ambiguous...”

The point of all this is to clarify the sort of theory Bell had in mind as satisfying the relevant standards of professionalism in physics.
Don’t know why I love this paper, but I do – it’s ‘crisp & clear’...
 
I’m sorry bhobba, I’m completely lost... are you saying that the inertial frame of earth has anything to do with the double-slit experiment??
It has nothing to do with it per se.

My comment was in relation to the claim that the measurement problem has something to do with QM holding the particle picture as fundamental. QM doesn't do that - the dynamics is, just as in Classical Mechanics, determined by symmetry arguments. There is no particle assumption other than that position is an observable, which is an experimentally verified fact.

For many practical purposes the earth can be considered to have these symmetry properties - that was my point.

Thanks
Bill
 
Consider the probability that the red point will land at a point with both coordinates irrational.
Well, since the rationals have Lebesgue measure zero, and there is no way to observationally tell the difference between a rational and an irrational point - that would require infinite measurement precision - it's not a well-defined problem physically.

If you seriously doubt that a probability of 1 means a dead cert, then I think this thread is not the appropriate place to discuss it. The Set Theory, Logic, Probability and Statistics subforum is more appropriate, so I will be doing a post there.

Thanks
Bill
 
Gleason's theorem cannot be used in the general case for what he thinks it can, but only for discretized, lattice models of physical systems - a very strong assumption in light of what we know. At least I think most physicists still favor a continuous picture of nature, as exemplified by successful theories like GR.
Gleason's theorem holds for infinite dimensional Hilbert spaces:
http://kof.physto.se/theses/helena-master.pdf

I have zero idea why you would think otherwise.

It even holds for non-separable spaces - not that that is of any value to QM.

The issue with Gleason's theorem is that its physical basis is a bit unclear. Mathematically, what's going on is well understood: it depends on non-contextuality, and, again mathematically, contextuality is a bit of an ugly kludge - but exactly why, from a physical point of view, you should require non-contextuality is open to debate. This is the exact out Bohmian Mechanics uses, and it's a valid theory. But the Hilbert space formalism is ugly if you don't assume it - you can't define a unique probability measure, so the question becomes: what use is a Hilbert space to begin with? Indeed, for BM the usual formulation is secondary in that interpretation.

My point is that Born's rule is not dependent on a particle model - its basis is non-contextuality in the usual formulation, or specific assumptions in other formulations like BM.
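To make the non-contextuality premise concrete, here is a minimal numerical sketch (the qutrit state and the pair of bases are toy choices of mine; it only illustrates the premise Gleason's theorem starts from, not the theorem itself):

```python
import numpy as np

rng = np.random.default_rng(2)

# A random density matrix rho on a 3-dimensional Hilbert space
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = M @ M.conj().T
rho /= np.trace(rho)

def prob(v):
    """Born-rule probability assigned to the ray spanned by vector v."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return float(np.real(v.conj() @ rho @ v))

# Two orthonormal bases (measurement contexts) sharing the same first vector
basis_A = np.eye(3)
t = 0.7
basis_B = np.array([[1, 0, 0],
                    [0, np.cos(t), np.sin(t)],
                    [0, -np.sin(t), np.cos(t)]])

# Non-contextuality: the probability assigned to the shared vector does not
# depend on which basis (context) it is embedded in ...
print(prob(basis_A[0]), prob(basis_B[0]))

# ... and in every context the basis probabilities sum to 1 -- the
# "frame function" property that Gleason's theorem takes as its premise.
print(sum(prob(b) for b in basis_A), sum(prob(b) for b in basis_B))
```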

Thanks
Bill
 
Adding or subtracting a set of measure zero does nothing.
Exactly. This is bog-standard stuff from the more advanced probability texts that take a rigorous approach. Finding probabilities associated with determining rational or irrational numbers is not a well-defined problem, since the rationals have Lebesgue measure zero.

I think a discussion of exactly what probability 0 and probability 1 mean is best done on the probability subforum, and I will do a post there.

Thanks
Bill
 
Guys, it’s very interesting to read this discussion, and this stuff is always hard to talk about. Still, let me give you something to chew on while the ‘battle’ continues. :wink:

http://arxiv.org/abs/0707.0401


Don’t know why I love this paper, but I do – it’s ‘crisp & clear’...
Because it brings us Bell in his own deep and intelligent words, contrary to the tradition of misinterpreting him that abounds in the QM literature :devil:.
:smile:
 
The issue with Gleason's theorem is that its physical basis is a bit unclear. Mathematically, what's going on is well understood: it depends on non-contextuality, and, again mathematically, contextuality is a bit of an ugly kludge - but exactly why, from a physical point of view, you should require non-contextuality is open to debate. This is the exact out Bohmian Mechanics uses, and it's a valid theory. But the Hilbert space formalism is ugly if you don't assume it - you can't define a unique probability measure, so the question becomes: what use is a Hilbert space to begin with? Indeed, for BM the usual formulation is secondary in that interpretation.

My point is that Born's rule is not dependent on a particle model - its basis is non-contextuality in the usual formulation, or specific assumptions in other formulations like BM.

Thanks
Bill
Bill, I agree with the quoted part.
Non-contextuality is a strong assumption IMO. But yes, it makes the Hilbert formalism "ugly" not to adopt it. But Gleason's theorem assumes non-contextuality, and that was the sense of my comment about the theorem's lack of generality: there are QM interpretations that don't assume non-contextuality (you mentioned BM, but there are also the modal interpretations and others).
I have a doubt about this, because I've seen quantum non-contextuality defined in two ways that I guess are equivalent; maybe you can help me connect them: as independence of the measurement arrangement, and as basis-independence of the probability assigned to a vector.
 

Jano L.

I think a discussion of exactly what probability 0 and probability 1 mean is best done on the probability subforum, and I will do a post there.
I am looking forward to it. However, the argument was about something different: whether a deterministic theory is a special kind of probabilistic theory. I am quite interested in what others think about this.
 
Bill, I agree with the quoted part.
Non-contextuality is a strong assumption IMO. But yes, it makes the Hilbert formalism "ugly" not to adopt it. But Gleason's theorem assumes non-contextuality, and that was the sense of my comment about the theorem's lack of generality: there are QM interpretations that don't assume non-contextuality (you mentioned BM, but there are also the modal interpretations and others).
I have a doubt about this, because I've seen quantum non-contextuality defined in two ways that I guess are equivalent; maybe you can help me connect them: as independence of the measurement arrangement, and as basis-independence of the probability assigned to a vector.

http://arxiv.org/pdf/1207.1952v1.pdf

..."The concept of contextuality states that the outcomes of measurement may depend on what measurements are performed alongside"...
 

stevendaryl

Bill, I agree with the quoted part.
Non-contextuality is a strong assumption IMO. But yes, it makes the Hilbert formalism "ugly" not to adopt it. But Gleason's theorem assumes non-contextuality, and that was the sense of my comment about the theorem's lack of generality: there are QM interpretations that don't assume non-contextuality (you mentioned BM, but there are also the modal interpretations and others).
I did a Google search on the phrase "non-contextuality" and although it gets many hits, I still don't really understand what it means. Can someone give a real definition, and briefly explain why it's relevant in interpretations of quantum mechanics?
 
I did a Google search on the phrase "non-contextuality" and although it gets many hits, I still don't really understand what it means. Can someone give a real definition, and briefly explain why it's relevant in interpretations of quantum mechanics?
Values that do not depend on the context - that is, values independent of their antecedents (of which compatible measurements are performed alongside).

http://plato.stanford.edu/entries/kochen-specker/


https://www.physicsforums.com/showpost.php?p=4401195&postcount=12

Really, you have to define locality/non-locality as subsets of contextuality/non-contextuality: contextuality is broader, and subsumes locality/non-locality. Every state that is contextual with respect to the defined test of contextuality is nonlocal as per the CHSH test (Clauser, Horne, Shimony, Holt), but the converse is not true. Or, as I like to ask:

Every state that is contextual is nonlocal.
...and conversely, is every state that is nonlocal contextual?

-----
Measured values (attributes, characteristics, properties) are always values in a context - that is the point of "context". But being real goes beyond properties: the possibility of values requires pre-existing objects or processes. Without objects, there is no possibility of properties (values).
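Since the CHSH test is invoked here, a minimal worked example may help (standard singlet correlations with the usual optimal angle choices; purely illustrative):

```python
import numpy as np

# Singlet-state correlation for analyzer angles a and b: E(a, b) = -cos(a - b)
def E(a, b):
    return -np.cos(a - b)

# The standard CHSH combination with the optimal angles
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(S)                 # -2*sqrt(2) ~ -2.828
print(2 * np.sqrt(2))    # Tsirelson bound; any local model obeys |S| <= 2
```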
 

stevendaryl

Values that do not depend on the context - that is, values independent of their antecedents (of which compatible measurements are performed alongside).

http://plato.stanford.edu/entries/kochen-specker/


https://www.physicsforums.com/showpost.php?p=4401195&postcount=12
That leaves me a little puzzled, still. Maybe somebody can give examples of a toy model that is contextual?

What I do understand is this: the argument that it is impossible for a hidden-variables theory to reproduce the predictions of quantum mechanics in an EPR-type twin-particle experiment assumes that the hidden variable is unaffected by choices made at the measuring devices. But such an effect is ruled out by locality, so I don't see why "contextuality" matters in that case.
 

DrChinese

That leaves me a little puzzled, still. Maybe somebody can give examples of a toy model that is contextual?

What I do understand is this: the argument that it is impossible for a hidden-variables theory to reproduce the predictions of quantum mechanics in an EPR-type twin-particle experiment assumes that the hidden variable is unaffected by choices made at the measuring devices. But such an effect is ruled out by locality, so I don't see why "contextuality" matters in that case.
An example of a contextual model can be seen if a context is considered to reside in the future. If Alice and Bob can signal from the future to the past as to what they are planning to measure, then entangled-state correlations are easier to explain. Nothing needs to propagate faster than c for such a mechanism to operate (and to properly describe any existing experiments).

So here we have locality respected while non-contextuality is not, which is essentially what you are looking for. Most contextual models seem "strange" as in counter-intuitive. I personally don't see them as any stranger than non-local ones.
 

DevilsAvocado

That leaves me a little puzzled, still. Maybe somebody can give examples of a toy model that is contextual?
Spekkens' toy model takes the epistemic view and has local, non-contextual variables (= it fails to reproduce violations of Bell inequalities); here's a short introduction and here's the arXiv paper.

Jan-Åke Larsson has made a contextual extension of this toy model:

http://arxiv.org/abs/1111.3561
A contextual extension of Spekkens' toy model said:
Quantum systems show contextuality. More precisely, it is impossible to reproduce the quantum-mechanical predictions using a non-contextual realist model, i.e., a model where the outcome of one measurement is independent of the choice of compatible measurements performed in the measurement context. There have been several attempts to quantify the amount of contextuality for specific quantum systems, for example, in the number of rays needed in a KS proof, or the number of terms in certain inequalities, or in the violation, noise sensitivity, and other measures. This paper is about another approach: to use a simple contextual model that reproduces the quantum-mechanical contextual behaviour, but not necessarily all quantum predictions. The amount of contextuality can then be quantified in terms of additional resources needed as compared with a similar model without contextuality. In this case the contextual model needs to keep track of the context used, so the appropriate measure would be memory. Another way to view this is as a memory requirement to be able to reproduce quantum contextuality in a realist model. The model we will use can be viewed as an extension of Spekkens' toy model [Phys. Rev. A 75, 032110 (2007)], and the relation is studied in some detail. To reproduce the quantum predictions for the Peres-Mermin square, the memory requirement is more than one bit in addition to the memory used for the individual outcomes in the corresponding noncontextual model.
What I do understand is this: the argument that it is impossible for a hidden-variables theory to reproduce the predictions of quantum mechanics in an EPR-type twin-particle experiment assumes that the hidden variable is unaffected by choices made at the measuring devices. But such an effect is ruled out by locality, so I don't see why "contextuality" matters in that case.
I agree - I always thought that contextuality means that the entire measurement setup has to be taken into consideration as the context of the outcome, i.e. if Alice puts her polarizer orthogonal to Bob's, this will have an effect on the outcome of photon B = non-locality...
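Since the Peres-Mermin square comes up in the abstract above, here is a small sketch of it (the standard construction; the numerical check is purely illustrative):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
k = np.kron

# The Peres-Mermin square: nine two-qubit observables with eigenvalues +/-1;
# the three observables in any row or column commute (a "context").
S = [[k(Z, I2), k(I2, Z), k(Z, Z)],
     [k(I2, X), k(X, I2), k(X, X)],
     [k(Z, X), k(X, Z), k(Y, Y)]]

# Every row multiplies to +identity; the columns multiply to +identity,
# +identity, and -identity:
for r in range(3):
    print(np.allclose(S[r][0] @ S[r][1] @ S[r][2], np.eye(4)))
for c in range(3):
    target = np.eye(4) if c < 2 else -np.eye(4)
    print(np.allclose(S[0][c] @ S[1][c] @ S[2][c], target))

# A non-contextual assignment would give each observable a fixed value +/-1
# independent of which row/column it is measured in. Multiplying all six
# constraints: the rows force +1, the columns force -1, yet each observable
# appears exactly once in a row and once in a column. No assignment works.
```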
 
