# Anti-realist Interpretations of QM

Gold Member
Except that there is evidence that there is no such model. What you are describing is a non-contextual model. Which is one which is observation independent. You could say MWI is such a model, I guess, although all possibilities are realized with that rather than none. Not sure that actually solves your quandary.

Past that, pretty much everything is contextual. Which is precisely the opposite - the property & value only come into focus when the measurement is performed.
Thanks for that clarification. I have come across the term "contextual" in the context of QM and had a rough understanding of it, but this has given me a clearer understanding of the term.

If there is a quantum system prior to measurement but our models do not or cannot describe it prior to measurement, does that mean that our models are necessarily incomplete? Incomplete models of the system/nature, I mean.

Paul Colby
Gold Member
I think, perhaps, thinking of reality in terms of macroscopic objects can be somewhat of a hindrance. It is entirely natural and almost impossible not to do, but the idea that I have in mind tries to get away from that.
I think this is the same mistake so many people make. Independent of my opinion, you will need much more than philosophy to resolve such a problem.

Lynch101
DrChinese
Science Advisor
Gold Member
If there is a quantum system prior to measurement but our models do not or cannot describe it prior to measurement, does that mean that our models are necessarily incomplete? Incomplete models of the system/nature, I mean.
No one is questioning that the quantum system exists, let's agree on this point. The moon is there when you are not looking at it. What's missing - to continue the analogy - is whether the moon is yellow (thinking of this as an observable property) when it is not observed. Of course the moon is macroscopic, so I don't really believe that it is not yellow when scientists don't look at it.

Is the model complete? Well sure it is. That is, that which the model attempts to map is complete. But no model is "true". Models are useful tools. Better models normally require more inputs to provide better output descriptions. In the case of Bohmian Mechanics, as Demystifier will tell you, he is missing key input variables that would enable him to predict the outcome of a quantum measurement in advance.

In the case of the antirealist position, there is no such missing variable. A system in a superposition of states lacks any local property whatsoever that would predetermine the outcome of a quantum measurement of that superposition.

So our quantum model is complete as is. Can a new model be created that relies on more information?

a) The answer is YES if you look for the added information non-locally. Bohmians follow this line. However, their improved model cannot operate, simply because the added information they need is NOT available, even in principle.

b) The answer is NO if you are an antirealist. The current model is the best there is precisely because there are no additional variables to tap.

Again, the interpretations attempt to address this question - with varying results that are not fully satisfying* to anyone.

*To be fully satisfying, an interpretation would need to be convincing to most other scientists. That probably means it would need to be falsifiable by experiment, and then said experiment performed with a favorable outcome. 90 years later... good luck with that!

Lynch101
Gold Member
No one is questioning that the quantum system exists, let's agree on this point. The moon is there when you are not looking at it. What's missing - to continue the analogy - is whether the moon is yellow (thinking of this as an observable property) when it is not observed. Of course the moon is macroscopic, so I don't really believe that it is not yellow when scientists don't look at it.
I think we might still be talking at cross purposes a little. I'll try to pinpoint what I see as the issue.

Initially, I was of the [mis]understanding that the anti-realist theory said, to stick with the analogy, that the moon was not there when no one is looking at it. I understand now that was a misinterpretation on my part. There seems to be agreement that the moon exists prior to measurement.

Is the model complete? Well sure it is. That is, that which the model attempts to map is complete.
This is where I think we are talking at cross purposes, the sense in which the model is complete.

@Morbert clarified a couple of points that I had been misunderstanding, when he said.
Quantum theory in a strict sense is nothing more than the set of rules whereby physicists compute probabilities of the outcomes of macroscopic tests.
anti-realist is agnostic towards the nature of the reality of quantum systems, and rejects physical properties in our models as real
Following on with the analogy, there might be agreement that the moon is there whether we look at it or not, but the above seems to say that our model only describes the part of the moon we can see [from Earth, let's say]. If the purpose of the model is to only map the part of the moon that we can see, then it is complete in what it sets out to do.

We can, however, distinguish a complete model of the part of the moon we can see from a complete model of the moon itself, and there seems to be agreement that there is a part of the moon that we cannot see. It is in this sense that I am reasoning that our model is not a complete model of nature.

I thought this was fundamentally what Einstein was driving at when he made his statement in relation to the moon. I know the EPR paper was based on certain classical preconceptions which Bell subsequently ruled out, but I feel like there was a more fundamental point as demonstrated by saying that the moon is there whether we look at it or not.

Now, it might be the case that it simply isn't possible, even in principle, to develop a complete model of the moon, but that would just mean that we can never have a complete model of nature.

Gold Member
I think this is the same mistake so many people make. Independent of my opinion, you will need much more than philosophy to resolve such a problem.
Is there a consensus [among anti-realists] that it is not possible, even in principle, to distinguish between the various interpretations on the basis of experiment? If so, would that not mean that philosophy is the only way to probe the question further? I'm not necessarily saying to resolve it, because philosophy might not be able to resolve it, but it might be possible to draw certain conclusions or consequences which help to frame the different interpretations.

PeterDonis
Mentor
2019 Award
Is there a consensus [among anti-realists] that it is not possible, even in principle, to distinguish between the various interpretations on the basis of experiment?
This is what "interpretations" means. If you have something that makes different experimental predictions from standard QM, it isn't an interpretation of QM, it's a different theory.

If so, would that not mean that philosophy is the only way to probe the question further?
No, it means you can't probe the question further at all unless and until you find some different theory from standard QM that can be tested experimentally against it.

Lynch101
PeterDonis
Mentor
2019 Award
it might be possible to draw certain conclusions or consequences which help to frame the different interpretations
If this were going to be helpful, one would expect that the voluminous literature over the past century or so that has attempted to do this would have had some effect. But it doesn't appear that it has. Everything that has happened up to now indicates that a priori reasoning in the absence of any possible experimental test simply doesn't help us humans to find better theories or better interpretations of theories.

Lynch101
DrChinese
Science Advisor
Gold Member
Now, it might be the case that it simply isn't possible, even in principle, to develop a complete model of the moon, but that would just mean that we can never have a complete model of nature.
As mentioned, there is nothing incomplete in the current model(s).

EPR (1935) postulated that a more complete specification of the system was possible, since it is possible to predict in advance the outcome of certain quantum measurements (allegedly) without disturbing that system. They assumed that if you could predict any property's value in advance, then they must all exist in advance. This position was soundly rejected by most, but it remained feasible until Bell and Aspect. Now we know better.

I think if you worked through a Bell test example, you would understand better why my statement above generally matches the position of most physicists. Because we only have measurements to tell us anything about what is going on; and we know the statistical rules for the results of those measurements; and they cannot be modeled by having values prior to measurement due to Bell: we have enough to conclude that anything left to add is just a philosophical or interpretational point. Even the interpretations do not purport that there is more; they merely purport to explain a mechanism whereby the measurement results in a value. (That usually being the measurement of an entangled system, which is why I suggest working through the math of a Bell test.)

Try imagining hypothetical pairs of entangled photons, and how they end up with values that match the statistical predictions of QM. Consider examples at 0 degrees difference and 120 degrees difference, but also at all other angles. You will quickly see that to be statistically correct, you MUST know the measurement setting(s) in advance. That is exactly the requirement of ANY contextual model. But it is NOT allowed in any non-contextual model.
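The 0°/120° exercise can be worked through with a short sketch. This is not from the thread; it is a minimal, hypothetical enumeration assuming photon pairs that match perfectly at equal settings and restricting attention to three settings (0°, 120°, 240°). Any non-contextual model must then pre-assign each pair a single answer triple, and brute force shows the best such model matches on at least 1/3 of unequal-setting runs, whereas QM predicts cos²(120°) = 1/4:

```python
from itertools import product

# Hypothetical non-contextual model: each photon pair carries predetermined
# answers (+1 or -1) for the three settings 0, 120, 240 degrees.
# Perfect correlation at equal settings forces both photons to share the
# same answer list, so a "strategy" is a single triple of +/-1 values.
pairs = [(a, b) for a in range(3) for b in range(3) if a != b]  # unequal settings

best = 1.0
for strategy in product([+1, -1], repeat=3):
    # fraction of unequal-setting runs whose outcomes match
    match = sum(strategy[a] == strategy[b] for a, b in pairs) / len(pairs)
    best = min(best, match)

print(best)  # 1/3 for the best predetermined strategy; QM predicts 0.25
```

No assignment of predetermined values gets below 1/3, so the QM statistics (1/4 at 120° relative angle) cannot be reproduced without knowing the settings, which is the contextuality requirement DrChinese describes.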

Lynch101
DrChinese
Science Advisor
Gold Member
@Lynch101

Putting that a little differently: Nothing BEFORE the measurement is missing or incomplete. Possibilities for completion exist at the time of measurement. Different interpretations explain to varying degrees how that happens, but the additional input variables are in the form of the complete context, which includes the measurement settings. There still appear to be random elements in the equation, the origin of which (again) is explained to varying degrees by each interpretation.

There essentially aren't any interpretations filling in the blanks BEFORE a measurement occurs. That this is the case is one of the strange attributes of QM. But there is definitely no basis for insisting there is something missing, other than to simply adopt that position and not move off of it. Which is what Einstein did (no criticism intended, he did not have the benefit of what we know today).

Paul Colby
RUTA
Science Advisor
Try imagining hypothetical pairs of entangled photons, and how they end up with values that match the statistical predictions of QM. Consider examples at 0 degrees difference and 120 degrees difference, but also at all other angles. You will quickly see that to be statistically correct, you MUST know the measurement setting(s) in advance. That is exactly the requirement of ANY contextual model. But it is NOT allowed in any non-contextual model.
Here is an interesting paper on the contextuality inherent in QM that is valid for retrocausation or
non-local constraints, which are not handled by Kochen-Specker type no-go theorems. They write:
The rationale in both cases is that non-contextuality could emerge naturally in such models: physical properties might well be “real” and “counterfactually definite”, but depend on future or distant measurements because of some physically motivated—although radically novel—causal influence. Such proposals do not fit neatly within the classical causal modelling framework, and so are not ruled out by recent work in this direction [9,22], nor by any of the existing no-go theorems.

In this paper, we characterise a new ontological models framework to prove that even if one allows for arbitrary causal structure, ontological models of quantum experiments are necessarily contextual. Crucially, what is contextual is not just the traditional notion of “state”, but any supposedly objective feature of the theory, such as a dynamical law or boundary condition. Our finding suggests that any model that posits unusual causal relations in the hope of saving “reality” will necessarily be contextual.

RUTA
Science Advisor
In an earlier post, someone classified Relational Blockworld (RBW) as anti-realist. We are not realists about the wavefunction, but we classify RBW as realist because there simply are no hidden "quantum entities" to be a realist about. What we have access to (classical reality) is certainly "real" and there are no other ontological entities in the RBW interpretation. That does not entail instrumentalism either :-) Here is our latest paper whereby physics is understood to be the study of certain constraints on experience. You only need to read Section 3 "Neutral Monism and the Axioms of Physics" to see what I mean. If you're interested in how it relates to entanglement, you can read Section 4 "The Axioms Reveal QM’s Completeness and Coherence." If you're interested in how it explains delayed-choice experiments, you can read Section 5 "QM and Experience."

Gold Member
EPR (1935) postulated that a more complete specification of the system was possible, since it is possible to predict in advance the outcome of certain quantum measurements (allegedly) without disturbing that system. They assumed that if you could predict any property's value in advance, then they must all exist in advance. This position was soundly rejected by most, but it remained feasible until Bell and Aspect. Now we know better.

I think if you worked through a Bell test example, you would understand better why my statement above generally matches the position of most physicists.
I think we're still talking past each other here. I'm not advocating the idea that a more complete specification of the system is possible. I can understand that a more complete specification might even be impossible in principle. But that we cannot have a more complete specification of the system doesn't mean that the specification we have is a complete specification of that system.

Because we only have measurements to tell us anything about what is going on; and we know the statistical rules for the results of those measurements; and they cannot be modeled by having values prior to measurement due to Bell:
I follow that, and I'm not objecting to it. This is the crux of what I'm trying to get at. We have statistical rules which tell us the probability of measurement outcomes but those rules don't appear to tell us about the system prior to measurement. If we agree that there is a system prior to measurement, then surely we must also agree that a model which only gives us predictions about measurements is not a complete specification of the system?

we have enough to conclude that anything left to add is just a philosophical or interpretational point.
This is essentially the point I am trying to get at, the idea that there is something left or that there must be something left to add because the model is not a complete specification of the system. If there is a system prior to measurement and our model only models measurements, then the system prior to measurement is the part to add. Yes, it may only be possible to speculate about it philosophically or by way of an interpretational point, but it highlights an incompleteness in the specification of the system.

In the analogy I used in response to Paul, we have a building and we are looking at an outside wall with 5 windows. If I tell you the probability for each window to be broken, and over a number of trials those probabilities turn out to be flawlessly accurate, then that is undoubtedly a very useful model. But it only tells you which window will be broken; it doesn't provide a complete specification of what happens inside the building that causes the windows to break.

We might agree that the system which we prepared causes the windows to break, but only predicting which windows will break doesn't give a complete specification of the system.

Demystifier
Science Advisor
Gold Member
Just thinking about this further. Do you need to find a way to articulate it? Does the materialist paradigm not articulate it already?

Is the idea that every part of nature must have physical properties an axiom of the materialist paradigm?
Maybe, but I think "matter" is another primitive notion that cannot be defined precisely.

Lynch101
Unless I don't understand your comment, Bell precludes that.

There are no data sets or statistical averages for quantum spins independent of the measurement setting(s). Keeping in mind that there are a multitude of statistical requirements due to the multitude of possible settings (thinking of typical Bell tests here).
It's true that there's no sample space that is, even in principle, appropriate for all observables. But we can still build models that reference the values of observables before measurement. E.g. Take a typical EPRB experiment: Alice and Bob each have one of a pair of entangled particles. At time ##t_1## Alice measures her particle (particle a) in the basis ##|\uparrow_x^a\rangle,|\downarrow_x^a\rangle## and at the same time Bob measures his particle (particle b) in the basis ##|\uparrow_z^b\rangle,|\downarrow_z^b\rangle##. A typical sample space of outcomes for this experiment could be built from the projective decomposition of the identity on ##\mathcal{H}_{t_1}##
$$\begin{eqnarray*} I &=& |\uparrow_x^a,A_\uparrow,\uparrow_z^b,B_\uparrow\rangle\langle\uparrow_x^a,A_\uparrow,\uparrow_z^b,B_\uparrow|_{t_1} + \\ && |\downarrow_x^a,A_\downarrow,\uparrow_z^b,B_\uparrow\rangle\langle\downarrow_x^a,A_\downarrow,\uparrow_z^b,B_\uparrow|_{t_1} + \\ &&|\uparrow_x^a,A_\uparrow,\downarrow_z^b,B_\downarrow\rangle\langle\uparrow_x^a,A_\uparrow,\downarrow_z^b,B_\downarrow|_{t_1} + \\ && |\downarrow_x^a,A_\downarrow,\downarrow_z^b,B_\downarrow\rangle\langle\downarrow_x^a,A_\downarrow,\downarrow_z^b,B_\downarrow|_{t_1} \end{eqnarray*}$$
where A and B are Alice and Bob's measuring devices respectively, and each of the four projectors above represents one of the four possible experimental outcomes.

But we could also construct an alternative model that includes statements about the spin of the particle at time ##t_0## before measurement. This time we consider the projective decomposition* of the identity on ##\mathcal{H}_{t_0}\otimes\mathcal{H}_{t_1}##

$$\begin{eqnarray*} I &=& |\uparrow_x^a,\uparrow_z^b\rangle\langle \uparrow_x^a,\uparrow_z^b|_{t_0}\otimes|\uparrow_x^a,A_\uparrow,\uparrow_z^b,B_\uparrow\rangle\langle\uparrow_x^a,A_\uparrow,\uparrow_z^b,B_\uparrow|_{t_1} + \\ && |\downarrow_x^a,\uparrow_z^b\rangle\langle \downarrow_x^a,\uparrow_z^b|_{t_0}\otimes|\downarrow_x^a,A_\downarrow,\uparrow_z^b,B_\uparrow\rangle\langle\downarrow_x^a,A_\downarrow,\uparrow_z^b,B_\uparrow|_{t_1} + \\ &&|\uparrow_x^a,\downarrow_z^b\rangle\langle \uparrow_x^a,\downarrow_z^b|_{t_0}\otimes|\uparrow_x^a,A_\uparrow,\downarrow_z^b,B_\downarrow\rangle\langle\uparrow_x^a,A_\uparrow,\downarrow_z^b,B_\downarrow|_{t_1} + \\ && |\downarrow_x^a,\downarrow_z^b\rangle\langle \downarrow_x^a,\downarrow_z^b|_{t_0}\otimes|\downarrow_x^a,A_\downarrow,\downarrow_z^b,B_\downarrow\rangle\langle\downarrow_x^a,A_\downarrow,\downarrow_z^b,B_\downarrow|_{t_1} \end{eqnarray*}$$
Here, we model the measured properties at ##t_0## before the measurement has occurred. All the inferences from this model would be just as valid as from the previous model.

tl;dr The formalism doesn't seem to privilege properties after measurement over properties before measurement. Whether or not these properties are real, they seem to be just as real/not real before and after measurement.

*neglecting projections that would give zero probability, like ##|\downarrow_x^a,\uparrow_z^b\rangle\langle \downarrow_x^a,\uparrow_z^b|_{t_0}\otimes|\uparrow_x^a,A_\uparrow,\uparrow_z^b,B_\uparrow\rangle\langle\uparrow_x^a,A_\uparrow,\uparrow_z^b,B_\uparrow|_{t_1}##
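
The four joint probabilities in the first decomposition can be computed directly from the Born rule. The following is a hypothetical sketch, and it assumes the usual singlet state (the post above does not fix the entangled state); for a singlet, Alice's x-basis outcome and Bob's z-basis outcome are uncorrelated and each of the four joint outcomes has probability 1/4:

```python
import numpy as np

# Basis: |up_z>, |down_z> for each particle; assumed singlet pair state
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Alice measures in the x basis, Bob in the z basis
up_x = (up + down) / np.sqrt(2)
down_x = (up - down) / np.sqrt(2)

# Born-rule probability for each joint outcome at t1
probs = {}
for a_name, a in [("up_x", up_x), ("down_x", down_x)]:
    for b_name, b in [("up_z", up), ("down_z", down)]:
        amp = np.kron(a, b) @ singlet   # amplitude <a_x, b_z | psi>
        probs[(a_name, b_name)] = amp ** 2

print(probs)  # each of the four outcomes has probability 1/4
```

Each projector in the decomposition corresponds to one of these four outcomes, and the probabilities sum to 1, consistent with the decomposition of the identity.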

Lynch101
DrChinese
Science Advisor
Gold Member
But we could also construct an alternative model that includes statements about the spin of the particle at time ##t_0## before measurement. This time we consider the projective decomposition* of the identity on ##\mathcal{H}_{t_0}\otimes\mathcal{H}_{t_1}##

$$\begin{eqnarray*} I &=& |\uparrow_x^a,\uparrow_z^b\rangle\langle \uparrow_x^a,\uparrow_z^b|_{t_0}\otimes|\uparrow_x^a,A_\uparrow,\uparrow_z^b,B_\uparrow\rangle\langle\uparrow_x^a,A_\uparrow,\uparrow_z^b,B_\uparrow|_{t_1} + \\ && |\downarrow_x^a,\uparrow_z^b\rangle\langle \downarrow_x^a,\uparrow_z^b|_{t_0}\otimes|\downarrow_x^a,A_\downarrow,\uparrow_z^b,B_\uparrow\rangle\langle\downarrow_x^a,A_\downarrow,\uparrow_z^b,B_\uparrow|_{t_1} + \\ &&|\uparrow_x^a,\downarrow_z^b\rangle\langle \uparrow_x^a,\downarrow_z^b|_{t_0}\otimes|\uparrow_x^a,A_\uparrow,\downarrow_z^b,B_\downarrow\rangle\langle\uparrow_x^a,A_\uparrow,\downarrow_z^b,B_\downarrow|_{t_1} + \\ && |\downarrow_x^a,\downarrow_z^b\rangle\langle \downarrow_x^a,\downarrow_z^b|_{t_0}\otimes|\downarrow_x^a,A_\downarrow,\downarrow_z^b,B_\downarrow\rangle\langle\downarrow_x^a,A_\downarrow,\downarrow_z^b,B_\downarrow|_{t_1} \end{eqnarray*}$$
Here, we model the measured properties at ##t_0## before the measurement has occurred. All the inferences from this model would be just as valid as from the previous model.

tl;dr The formalism doesn't seem to privilege properties after measurement over properties before measurement. Whether or not these properties are real, they seem to be just as real/not real before and after measurement.
Assuming I understand what you are saying: Your model provides that it has properties at t0 (before measurement) that are like its properties at t1 (at measurement). I guess that would literally meet the criteria you set, but wouldn't really be useful at any level that I can see.

Lynch101
DrChinese
Science Advisor
Gold Member
I think we're still talking past each other here. I'm not advocating the idea that a more complete specification of the system is possible. I can understand that a more complete specification might even be impossible in principle. But that we cannot have a more complete specification of the system doesn't mean that the specification we have is a complete specification of that system.
I guess I dispute your idea that we DON'T have a complete specification of the system. If we agree no greater detail is possible, we're done. And the reason we feel there is no greater detail possible is because it would lead to statistical contradictions. The property is fully blurred prior to measurement, and has no preferred value or basis that is in any way related to the outcome of a specific future measurement.

What CAN be said, for an entangled system, is that there is a conservation rule at play in that system. Such that A+B=k or A-B=k (k being some constant or initial value).
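
The conservation rule can be illustrated with a small sketch (an assumption on my part: I take the entangled system to be the spin singlet, for which A + B = 0 whenever both sides measure along the same axis, at any angle):

```python
import numpy as np

# Assumed singlet state in the |up_z>, |down_z> product basis
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

def axis_states(theta):
    """Spin-up/down states along an axis at angle theta (radians) in the x-z plane."""
    u = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    d = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return u, d

for theta in np.linspace(0, np.pi, 7):
    u, d = axis_states(theta)
    # probability that both sides get the SAME result along the same axis
    p_same = (np.kron(u, u) @ singlet) ** 2 + (np.kron(d, d) @ singlet) ** 2
    assert p_same < 1e-12  # outcomes are always opposite: A + B = 0
```

The constraint holds for every common axis, which is the sense in which the conservation rule is part of the specification even though no individual outcome is predetermined.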

Of course, some interpretations say more detail exists, but it is unknowable in principle. So if you are tilting in that direction, all is good.

Lynch101
Gold Member
I guess I dispute your idea that we DON'T have a complete specification of the system. If we agree no greater detail is possible, we're done. And the reason we feel there is no greater detail possible is because it would lead to statistical contradictions.
My thinking is that there is a difference between 1) no greater detail being possible and 2) there being no greater detail period.

A possible explanation for 1) is that there is a limit beyond which we cannot probe. There would be further to probe, but we simply cannot probe that far due to limitations of our instruments or a fundamental limit in nature.

This is contrasted with 2) the idea that there is nothing whatsoever there to probe.

In the context of the moon analogy, it sounds a little to me like seeing a reflection of the moon and concluding that the reflection is a complete model of the moon. The reflection is a complete specification of what it is possible to see but it isn't a complete model of the moon itself.

The property is fully blurred prior to measurement, and has no preferred value or basis that is in any way related to the outcome of a specific future measurement.
My initial understanding of the anti-realist position was that it denied even the blurred property prior to measurement, but my interpretation of what @Morbert said previously is that it doesn't even talk about the blurred property prior to measurement; it only models the measurement outcome.

It's the idea that there is a blurred property prior to measurement, which isn't modelled in the anti-realist interpretation (and perhaps cannot even be modelled in principle), which is leading me to conclude the model is not a complete description of the system. If the property cannot be modelled even in principle, then that would seem to suggest that no complete model of the system is possible.

Of course, some interpretations say more detail exists, but it is unknowable in principle. So if you are tilting in that direction, all is good.
I think this is the direction I'm tilting in, but my reasoning is that this is the only direction we can tilt.

The idea that there is no more detail whatsoever, sounds like the initial interpretation I had of anti-realism.

DrChinese
Gold Member
Maybe, but I think "matter" is another primitive notion that cannot be defined precisely.
This is probably true but I think it can be juxtaposed with [Cartesian] dualism, or other paradigms, to suggest what the alternatives might be and what the consequences of those are.

The materialist paradigm is generally juxtaposed with other paradigms which posit the existence of supernatural entities or substances, such as "the soul". So, if we say that the quantum system doesn't have physical properties prior to measurement, we must necessarily invoke a different paradigm and the consequences that flow from it.

On a separate but related note, does the point I am trying to make about the incompleteness of the anti-realist interpretation make sense to you or can you see what part of the argument I am missing?

PeterDonis
Mentor
2019 Award
The materialist paradigm is generally juxtaposed with other paradigms which posit the existence of supernatural entities or substances
It is? I thought the whole point of the materialist paradigm was to not posit the existence of supernatural entities or substances. See, for example, here:

https://plato.stanford.edu/entries/physicalism/

(As this article notes, "physicalism" is a synonym for "materialism".)

Lynch101
Gold Member
It is? I thought the whole point of the materialist paradigm was to not posit the existence of supernatural entities or substances. See, for example, here:

https://plato.stanford.edu/entries/physicalism/

(As this article notes, "physicalism" is a synonym for "materialism".)
Apologies, I could have worded that more clearly. It is the "other paradigms" which posit the supernatural. It is with those that materialism is usually juxtaposed to deny the supernatural.

Would objects which exist and are real not have properties by necessity? Would real and existing not be properties in and of themselves?
Aristotle: Properties are characteristic qualities that are not truly required for the continued existence of an entity (object, thing, ...) but are, nevertheless, possessed by the entity.

Lynch101
Assuming I understand what you are saying: Your model provides that it has properties at t0 (before measurement) that are like its properties at t1 (at measurement). I guess that would literally meet the criteria you set, but wouldn't really be useful at any level that I can see.
The 2nd model isn't particularly useful, but it at least illustrates that the formalism does not privilege predictions over retrodictions. It's just the case that physicists are more interested in predictions.

(Though with that said retrodictions can be useful when reasoning through scenarios like the Vaidman bomb, or when reasoning about the trajectory of emitted particles.)

Lynch101 and DrChinese
RUTA
Science Advisor
I guess I dispute your idea that we DON'T have a complete specification of the system. If we agree no greater detail is possible, we're done. And the reason we feel there is no greater detail possible is because it would lead to statistical contradictions. The property is fully blurred prior to measurement, and has no preferred value or basis that is in any way related to the outcome of a specific future measurement.

What CAN be said, for an entangled system, is that there is a conservation rule at play in that system. Such that A+B=k or A-B=k (k being some constant or initial value).

Of course, some interpretations say more detail exists, but it is unknowable in principle. So if you are tilting in that direction, all is good.
Here is my Insight explaining the type of conservation at work for the Bell spin states. [I don't know why so many of the equation numbers have turned into ??? when referenced and a couple of equations are cut in half. Hopefully, a Moderator can explain how to fix those.]

Lynch101 and DrChinese