# Tools to Enrich our Quantum Mechanics Interpretations Discourse

Many times, we find ourselves tangled in debates about the interpretation of Quantum Mechanics (QM). Unfortunately, the standard of discussion in these debates is often low in comparison to the standard in debates about the more technical aspects of the theory, which tend to be more precise and rigorous. A consequence of this is that “philosophy” has become a “bad word”, and “philosophical discussions” are banned and relegated to the realm of “useless and empty metaphysics” in the discourse of many physicists. In this particular forum (and similar sites), I understand the desire to clear it of the usual nonsense that revolves around these debates. But, to me, the problem is not philosophy; it’s bad-quality, unjustified philosophy.

We cannot reject “philosophy”, for the following reason: a mathematical theory without a factual interpretation is just math, not physics. A scalar field satisfying Laplace’s equation can be the potential for a gravitational field or a static electric field. It’s the factual interpretation (which reads it, e.g., as the potential for a gravitational field or a static electric field) that transforms this mathematical scalar field into physics. And, of course, it’s this *interpretation map*, which maps mathematical objects to real-world objects, that is the source of all the headaches, because of all the different takes that people adopt.

But we need the map to define/establish the theory, so what do we do then? Rather than simply deny the existence of this problem and thus allow utter chaos in this crucial topic, we need to attack it systematically, clearly, and with the same standard of precision and rigor with which we discuss math/physics problems. This topic is not “just philosophy”; it’s very relevant to physics, *it is actually part of physics*, simply because we need it to formulate the theories in the first place.

In the following, I will present some tools that I found useful for discussing this topic as I browsed some of the bibliography (cited at the end). I don’t claim that the viewpoint offered here, in these tools, is the ultimate one. But I do think it’s interesting and systematic enough to be of interest, and that’s why I’m writing this.

**Scientific semantics**

As I mentioned, to me, the problem is that people often don’t go to the core issues relevant to the analysis. I disagree with the view that going to the core of the issue means *going to the lab*. To me, that can add tremendous and unnecessary complication (which labs? which apparatuses? what’s the structure of these apparatuses? what happens in a different lab? etc.). The formulation and interpretation of a theory is something that belongs to the *theoretical framework*; a priori, it has nothing to do with labs and lab operations. Labs and lab operations become relevant in the *empirical testing* of the theory, *which is something that comes a posteriori, once the theory has been formulated*.

So, how do we formulate the theory, then? We know that a theory has some mathematical formalism; that’s the first part. But that formalism by itself has no physical meaning or information. To have this, we need a *physically interpreted mathematical formalism*. In this sense, a physical theory is composed of:

- the mathematical concept or *construct* (say, a function f between two sets A and B, a differentiable manifold, etc.); and
- its physical interpretation (e.g., A represents spacetime, B the real numbers, and f represents the intensity of some interaction).

We need both. This is nothing new and belongs to the philosophical discipline known as *semiotics*, which contains *factual semantics*, the topic of interest for formulating physical theories. This philosophical theory has nothing to do with “useless metaphysics”, nor with the usual use of the term “interpretation” in the QM interpretations debates (which usually attempt, e.g., to give a classical ontology to QM and things like that; here, interpretation has to be understood in the sense of the minimal things you need to formulate a physical theory). It just provides us with the tools to analyse the situation more systematically and clearly, that’s all; because the question of how to formulate a physical theory is a problem of factual semantics (semantics deals with how meaning is attached to symbols, constructs, etc., and factual semantics with how *physical* meaning is attached to them).

There are two types of *semiotic interpretations or semiotic maps*:

- the *purely semantic* one;
- the *pragmatic* one.

The purely semantic one, (1), assumes a certain type of realism, just the same you need in order to give any sense to the fundamental tenet of the scientific method (not to be confused with Einstein’s naive classical realism, in which things always have “definite values”, etc.; the realism I’m talking about here only assumes that some entities/things exist and have well-defined and independent properties). So, the interpretation here is very simple: property y of this physical entity will be represented by, say, the mathematical function f. And that’s it! You don’t need anything more; you simply make reference to the property because it exists, it’s simply there, you just point to it (semantically speaking, not with an actual finger). For example, if the physical entity is a “particle” and the property is its “classical electric charge”, then at the abstract, mathematical level we have a set ##P## and a function ##Q:P\longrightarrow\mathbb{R}##; the *semantic map* ##\varphi## relates these abstract mathematical objects to the real world: i) ##\varphi(P)=\text{particles}##; ii) ##\varphi(Q(x))=\text{electric charge of } x\in P##.
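As an illustration only (the names `P`, `Q`, and `PHI` below are invented for this sketch), the separation between the mathematical construct and the semantic map can be rendered in a few lines of code:

```python
# Toy sketch of a purely semantic interpretation, type (1).
# The mathematical side is just a set P and a function Q : P -> R;
# the semantic map phi adds no mathematics at all, it only attaches
# physical referents to the abstract objects.

# Mathematical side: an abstract set and a function on it
# (charges in units of |e|; values are illustrative).
P = {"e-", "p+", "n0"}
Q = {"e-": -1.0, "p+": 1.0, "n0": 0.0}

# Semantic map: labels the constructs with their physical referents.
PHI = {
    "P": "particles",
    "Q(x)": "electric charge of x in P",
}

print(PHI["P"], "->", sorted(P))
for x in sorted(P):
    print(f"electric charge of {x} = {Q[x]}")
```

Note that deleting `PHI` changes nothing on the mathematical side; that is exactly the point: the semantic map only attaches referents, it does not alter the formalism.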

The pragmatic interpretation, (2), does not assume an objective property of a system; instead, it starts with a series of laboratory (or whatever) operations. The interpretation would be as follows: the measurements made by these particular laboratory operations will be represented by the mathematical function f. Notice how drastically different the two are.

As an example, consider the color change in a piece of litmus paper. We know that this color change indicates acidity; but, if we abide by the spirit of (1), we cannot say that this change *is* acidity. Acidity is an *objective property* of the chemical medium, while the color change in a piece of litmus paper *measures* it.

**Meaning**

In general, when one is formulating a theory, one *only* uses interpretations of type (1), because we are only looking for a very basic thing: *basic physical meaning*; anything else would be completely superfluous here. When *performing empirical tests*, we resort to interpretations of type (2). But, in fact, the type of operations one considers in (2) will *depend* on the basic physical meaning of the concept according to (1). It’s actually the *meaning* of the concept that *justifies*, and allows us to elucidate, the type of operations we need to consider.

Since the word *meaning* has been used, it’s convenient to analyse it in more detail. The purely semantic interpretation takes a mathematical construct and maps it to something in the physical world. But this is not enough to give a physical meaning to this construct. The purely semantic interpretation at best gives the *physical referents* of the construct, i.e., the physical entities to which its physical interpretation refers. What’s important to note is that, in a full physical theory, this construct is not isolated. Indeed, there are other mathematical constructs in the theory, and in general they are all interrelated. These interrelations constitute what’s called the *sense* of the construct. These other constructs are also interpreted physically in terms of purely semantic interpretations. In this way, given a physical theory, the meaning of a construct is fully established by its *sense and reference*, which can be read off once the theory has been fully established in its mathematical axioms and semantic interpretations. The semantic interpretation is a necessary condition, but not a sufficient one, for giving full physical meaning to a construct.

For example, the concept of mass in classical mechanics is mathematically an additive function ##M:P\longrightarrow\mathbb{R}^{+}##. The semantic map ##\varphi## gives it a physical interpretation as follows: i) ##\varphi(P)=\text{particles}##; ii) ##\varphi(M(x))=\text{inertia of } x\in P##; iii) ##M## appears in the equations of motion multiplying the acceleration (which is another construct, with its own interpretation, and so on).

Clearly, i) and ii) concern the semantic/interpretation part, while iii) is related to the sense of the construct. So, it’s the full theory of classical mechanics (its mathematical formalism, its semantic interpretations, its laws, etc.) that determines its meaning.

It’s well known that the above three items in the meaning of mass are enough to come up with, and to justify, operational definitions that allow one to determine the actual value of the mass in an experimental context, in the empirical testing stage of the theory. For example, if ##M## is the value of the mass of the test particle and ##M_{0}## is the value of a (by convention) unit mass, we know by the third law that if the two particles interact, then (in magnitude) ##Ma_{M}=M_{0}a_{0}##. In this way, setting ##M_{0}=1## by convention, the value ##M## can be calculated in terms of the measured values of these accelerations, ##M=a_{0}/a_{M}##.
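As a toy numerical sketch of the operational recipe above (the function name and the acceleration values are invented purely for illustration):

```python
# Operational determination of mass via Newton's third law, as described
# above: with M0 = 1 by convention, M * a_M = M0 * a_0, so M = a_0 / a_M.
# The acceleration values below are made up for illustration.

def mass_from_accelerations(a_0: float, a_M: float, M_0: float = 1.0) -> float:
    """Infer the test mass from the measured acceleration magnitudes of the
    two interacting particles, using M * a_M = M_0 * a_0."""
    return M_0 * a_0 / a_M

# Example: the test particle accelerates half as much as the unit mass,
# so it must be twice as massive.
M = mass_from_accelerations(a_0=2.0, a_M=1.0)
print(M)  # -> 2.0
```

Note that the formula, not the lab procedure, is what makes this a justified measurement method: the theory's third law is doing all the work.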

Of course, there can be many other operational methods, since this is not the way meaning is given to the concept of mass; it’s only one way of calculating it. We have used the theory to justify this method, so one may think there’s a risk of circularity. But this is not the case, because the “truth” of physical theories is not “proved” in terms of considerations independent of them. Instead, it is established empirically either that a theory is self-consistent and in agreement with the observed facts (of course, this will never be enough to “prove” the theory), or that it is inconsistent and in disagreement with the observed facts. In the second case, of course, the theory has been empirically falsified. So, the proposed method can actually help to establish whether the theory is self-consistent in the empirical realm. If, once the values of the masses have been obtained, the observed dynamics in a new experiment doesn’t agree with the second law, where the previously measured values of the mass are used in this law, then something is wrong with the theory; it’s not a correct description of nature. If there’s agreement, then we simply have a transitory, positive “ok” for the theory for now (which, of course, is not a “proof” of the theory), and this is indeed the standard scientific practice.

Another well-known example of this is the case of the Hulse–Taylor pulsar. Hulse and Taylor used the relativistic laws of orbital motion to infer the value of some orbital parameters from the value of other parameters that were easier to observe directly. Later, they used these calculated values to check the relativistic prediction of orbital decay by emission of gravitational waves.

A final example is the quantum z-spin, ##S_z##. The semantic rule says that, on the mathematical side, we have the z-Pauli matrix and that, on the physical side, this operator represents an objective physical property (called z-spin) of a quantum system, whose possible values are determined by the eigenvalues of the matrix. So, this makes clear the factual referent (a quantum system) and the type of thing that this operator represents in relation to this referent (a property of the referent). In the sense realm, we know this matrix is actually one of the generators of a representation of the group SU(2), the universal covering of the rotation group SO(3); so, it’s a type of angular momentum. This gives enough information to make connections to other theories. For example, if the quantum system has electric charge, we know from electromagnetic theory that this angular momentum will produce a magnetic moment and that, if we have a magnetic field B in the z-direction, the interaction energy will be proportional to ##S_z B##. So, without even knowing whether there’s some “apparatus” that supposedly measures this spin property “directly”, we can already use its meaning to combine it with other theories and start doing physics and making predictions. But, in fact, there is an apparatus to measure the spin: the Stern–Gerlach apparatus. This apparatus actually uses a non-homogeneous magnetic field to produce a force via the non-zero gradient of the interaction energy ##S_z B##. So, this apparatus is not measuring the spin “directly”, nor is it giving meaning to it. Actually (and ironically), it’s the meaning of this quantity, in the sense previously discussed, that justifies the particular design of the apparatus in the first place! So, again, meaning comes first; empirical testing, a posteriori.
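To make the example concrete, here is a minimal numerical sketch, using the standard convention ##S_z = (\hbar/2)\,\sigma_z## and units where ##\hbar = 1##; the possible values come out of the matrix alone, with no reference to any apparatus:

```python
# The z-spin operator of a spin-1/2 system: S_z = (hbar/2) * sigma_z.
# Its eigenvalues, the possible values of the objective z-spin property,
# are a function of the matrix alone: no apparatus appears anywhere.
import numpy as np

hbar = 1.0  # units where hbar = 1
sigma_z = np.array([[1.0, 0.0],
                    [0.0, -1.0]])
S_z = (hbar / 2.0) * sigma_z

eigenvalues = np.linalg.eigvalsh(S_z)  # ascending order
print(eigenvalues)  # -> [-0.5  0.5], i.e. -hbar/2 and +hbar/2

# Interaction with a magnetic field B along z: energy proportional to
# S_z * B (gyromagnetic factors omitted for simplicity).
B = 2.0
H_int = S_z * B  # diagonal; its entries are the interaction energies
print(np.diag(H_int))
```

The same matrix, fed into an interaction term, is what singles out the Stern–Gerlach design discussed above: the gradient of this energy is what deflects the beam.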

The previous examples, all taken from the actual practice of physics, show that this notion of meaning is not the invention of some crazy philosopher, but that it’s actually the one used by practicing physicists.

**Operationalism in Modern Physics**

In the *operationalist philosophy of physics* (like the usual Copenhagen interpretation; yes, it’s a philosophy, it makes philosophical assumptions, it’s not philosophically neutral), *the formulation of the theory itself is made in terms of pragmatic interpretations* of type (2).

This is very problematic for the following reason: in a pragmatic interpretation, one usually brings into the scene a lot of things (like measurement apparatuses, human observers, etc.) that are necessary for obtaining the actual measured value, but which are completely alien and superfluous to what’s needed to give basic physical meaning to the property being measured.

As I mentioned before, this confuses two very different levels: the formulation of the theory and its, a posteriori, empirical testing. This confusion can even introduce incoherence into the theory (if classical apparatuses are needed to interpret the variables, then quantum theory is not a classical theory, and yet classical physics sits in its very assumptions!). The value obtained in a pragmatic interpretation will, in general, be a function of the system being measured, of course, but also of the measurement apparatuses and possibly many other details of the experimental context. And this is fine, as long as it’s clear to us that we are in the empirical testing stage, not in the formulation stage.

Let’s consider a simple example. Let A be a quantum operator representing some property of a quantum system. The standard operationalist interpretation says: “the eigenvalue a of the operator A is a possible result/value of *measuring* A in some *experimental context/setup*”.

Now, let’s analyse what the actual quantum theory (the actual mathematical formalism, plus the physical interpretation of its mathematical concepts/symbols, that we as physicists use every day in our routine calculations) says. Since this theory is providing an interpreted mathematical symbol/concept, it makes sense to apply another tool from the theory of factual semantics: *identification* of the *factual referents*. As mentioned earlier, when the mathematical concept is interpreted in terms of physical facts, *reference* is made to physical entities. These entities are the factual referents. A quantum particle can be a referent; so can a measurement device, and even the “mind” of a human observer.

So, let’s take the actual A given by quantum theory and investigate its referents (think of A as, say, the ##S_z## spin operator; the hamiltonian, as generator of the dynamics, can contain the environment, so it has to be analysed separately). And here comes the “surprise”: in the standard formalism of quantum theory, A and its eigenvalues are *only* a function (in the semantic sense) of the quantum particle; they make *no* reference, and their form or content *does not* depend in any way whatsoever, on measurement devices, minds, etc. So: it seems that *it is actually* the operationalist interpretation that attaches to A some “pretty useless and empty” referents, which (*ironically*) make no difference to the predictions of the actual theory. When this happens, the interpretation is said to be “adventitious”. Most pragmatic interpretations, when used to supposedly give basic physical meaning to some construct, commit this semantic sin *massively*: they introduce physical entities that are *orphaned* of any mathematical correlate in the theory; they become ghosts, smuggled into the theory by contraband, and can be removed as easily as they are introduced, since they make no difference. So, in the spirit of (1), we reformulate: “the eigenvalue a of the operator A *is a possible value of the objective property* represented by this operator”. No labs, no measurement devices, just the quantum system and its objective properties.

Notice that someone could come up with a theory in which A does depend on the apparatus. Even more, someone might show that this new theory makes more precise predictions than QM and that these predictions have been confirmed in the lab. But this new theory would be precisely that: a new theory, superior to QM. What the semantic analysis above shows is that, as far as the accepted formalism of QM goes, there’s no reference to apparatuses in A or its eigenvalues. So, for the standard formalism of QM, A and its eigenvalues are real, objective, and independent properties of the quantum system. Of course, this may not be the case in nature. In this sense, the semantic interpretations (and, in fact, the whole theory) are actually hypotheses about the world, which may be only an approximation. But one has to formulate the theory at some point and make these hypotheses explicit, which in the case of QM amounts to not considering in its formalism properties that depend on entities other than the quantum system. Maybe this is not correct, but that’s the job of another theory. These are another two levels that should not be confused.

**Objective Probability**

Similar considerations apply to the interpretation of probability. The physical property in this case is the *objective tendency or propensity* of an *individual* system to do something at any given moment (even when it’s not interacting with any “measurement apparatus”). On the mathematical side, we have a mathematical probability theory (i.e., a measure space equipped with a probability measure; in the case of quantum theory, it’s actually the non-distributive lattice of orthogonal projectors on a Hilbert space, equipped with a quantum probability measure). The semantic map interprets the elements of the space as physical events or propositions, and the probability measure as a *measure of the intensity of this tendency or propensity* of the system toward that event.

In an experimental context at the testing stage of the theory, it’s evident, given this interpretation of probability, that the *experimental frequency* obtained by measuring many identical systems will approximate the theoretical value of the probability. But, again, this frequency cannot be taken as the meaning of the probability; it’s just one possible way of (approximately) measuring its value in the lab. Another well-known method is measuring the intensities of spectral lines in a spectroscopy experiment. Clearly, these two methods are themselves only justified in light of the previous objective interpretation of the probability.

The advantages of this interpretation are that, first, contrary to the other physical interpretation, the frequentist take, the propensity can also be applied to an isolated system (like the decay of an individual, isolated atom); and second, since it’s purely physical, objective, and independent, it doesn’t need apparatuses and thus can be applied to systems like a free particle. Of course, its objective and independent character makes possible a purely semantic interpretation of the corresponding mathematical construct, and thus avoids the risk of introducing superfluous, adventitious interpretations. Of the two available physical interpretations of probability (frequency and propensity), the propensity interpretation is superior because of all the previously mentioned points, and also because it allows one to justify the operational frequentist approximation via the law of large numbers (a well-known theorem of mathematical probability). Finally, the concept of propensity acquires a precise meaning (in the sense previously discussed) when it’s used to physically interpret the mathematical construct of probability (the probability measure), since this provides a sense and reference for the construct; the fact that it clearly justifies the frequentist approximation should be enough evidence of its well-defined meaning (since only a well-defined meaning is capable of doing something like this).
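The way the law of large numbers lets the objective propensity justify the frequentist approximation can be illustrated with a short simulation (the propensity value 0.3 is, of course, made up for the example):

```python
# A propensity-flavored reading of the law of large numbers: each individual
# system has an objective tendency p toward the event; simulating many
# identical, independent systems, the observed frequency approaches p.
import random

random.seed(0)   # fixed seed for reproducibility
p = 0.3          # theoretical propensity of the event (illustrative value)

for n in (100, 10_000, 1_000_000):
    freq = sum(random.random() < p for _ in range(n)) / n
    print(f"n = {n:>9}: frequency = {freq:.4f}")
```

The logical direction matters: the propensity is assigned to each individual trial, and the frequency is a derived, approximate measurement of it, not its definition.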

To try to give an operationalist “interpretation” of probability (or of any other construct) is to underestimate the creativity of experimental physicists; a great part of the creativity in the experimental stage lies in coming up with ingenious ways to test and measure the abstruse (though well-defined) concepts given by the theoreticians. It’s not the theoretician’s task to give methods to *test* concepts, only to give physically well-defined concepts. One cannot fill theoretical textbooks with dozens of different operationalist interpretations, which in general tend to be highly idealized anyway and, in this way, just useless, unrealizable inventions whose only purpose, then, seems to be to give some psychological comfort to the neo-positivist-minded reader.

**Final Comments/Remarks**

Emphasis has been placed on eliminating the “lab” component from some of the statements of the theory. But this does not mean that the *environment* (considered here as an entity that exists objectively, and in which we include measurement apparatuses and the like) cannot interact with the quantum system. It can; but the form of this interaction has to be modeled by the formalism of QM itself, i.e., it’s not part of the formulation of the basic elements of the theory. Instead, the corresponding, possibly non-trivial time evolution of the probability densities (for example, they can become sharp distributions after the interaction with the apparatus) has to be explained by the theory once this theory has been fully established and the environment included in its hamiltonian. In this sense, no mention is made in the formulation of the theory of subjective entities or pieces of equipment; so, the theory is free of anthropocentrism, as any physical theory worth its salt should be, and its domain is the real world rather than a set of human operations. The physical meaning of all the variables of the theory is strictly physical and objective.

All of these tools can, of course, also be applied to analyse other physical theories besides QM.

**References, Bibliography and Further Reading**

Of course, none of this system is mine; it can be found in the first and second volumes (“Semantics I: Sense and Reference”; “Semantics II: Interpretation and Truth”) of Mario Bunge’s Treatise on Basic Philosophy. All of the remaining volumes of the Treatise are also recommended (they deal with scientific ontology and scientific epistemology & methodology).

Also recommendable is his Philosophy of Physics: https://books.google.com.ar/books?id=7TSv27CGZWoC&pg=PA1&hl=es&source=gbs_toc_r&cad=3#v=onepage&q&f=false

“I am a bit anti-philosophy personally”: my thoughts as well, but I didn’t want to put it that way, because I was already leaning towards a “leave the details, they are not important” type of argument. I cannot emphasize strongly enough how important it is to define an event (or a natural phenomenon) correctly. Defining an argument with the least amount of the best words is exactly what every science, linguistics, and programming (lawyers excepted) is trying to achieve, and the reason is so that we can take the argument from there and store it in our brain, where we’ll write it into memory by our own means, so that we can use it in an upcoming process. If we leave loose ends in arguments, intentionally or unintentionally, we are also leaving a door open to (I’m sorry if this offends anyone) Deepak Chopra style thinking, but only in the verbal-argument area, which we already cleared of those kinds of people. I never underestimated the value of formalism, definitions, and semantics, but, as a physics student, I always thought there must be a way to reduce this unnecessary, self-repeating part. This might sound like yet another self-proclaimed genius speaking, but it seems to me that most of the weird stuff in QM is really not that difficult to understand for the new generation. Non-locality? No problem; we’ve been playing online games where your character is suspended in a jail cell while you can still control your bank account in another city (Ultima Online). It might sound like an ultra-stupid place to start my argument, but most of that generation went into programming to understand how to create these kinds of realities, which are actually nothing more than mimicking the real world; half of that half (obviously I’m making up the fractions) also got interested in physics and mathematics. On top of our overall understanding increasing exponentially, the age at which you start to be exposed and adapted to this new environment is also decreasing, and they’re improving their branches, in my opinion, thanks to their early understanding of their environment. The world of brokerage, that thing that was “so complicated that normal people need some abnormally clever guy who understands it to understand it”, is almost gone now, thanks to open-source programming of automated market buy/sell bots. Most of these programmers are using things they learned in thermodynamics classes, and interestingly it’s working. All this is thanks to daily language being a rudimentary version of programming languages, and programming languages being a rudimentary version of mathematics, with which we understand and describe the world. A generation that is growing up with the aid of technology, whether physicists or not, and no matter how much we say that the internet, phones, and TV are making us stupid, will be more capable of solving these problems. I mean, yes, some of us are getting more stupid, but let’s face it, this was going to happen with or without the phones or the TV. What I’m trying to say is: I really am motivated by that “we need to attack it systematically” part. We can create a better platform for QM education.

(Yes, I have a dream.) We almost have all we need. Medical students are about to use VR and AR sets to learn human anatomy, and who knows, maybe for surgeries in the future; engineers are using all sorts of 3D rendering, modelling, and animation programs; I even saw a student (I can’t remember her branch, it was something to do with selling real estate) using 3D simulations for villages. Yes, we have many beautiful online resources (the YouTube video myxx2uaqPLM, csi.chemie.tu-darmstadt.de/ak/immel/misc/oc-scripts/orbitals.html?id=1?lang=EN, geart3, Space Engine, atom theabox, etc.), but they are scattered all around the internet, and you never know if one simple video about entanglement will end up with an “ancient Egyptian gods are watching us” kind of conclusion… yes, it happens a lot. It’s probably not my place to suggest anything among experienced scientists, but for someone who’s dying to get into the game (if he graduates without any shame), it looks like most of these “are we interpreting it right? / yes, but we missed this part, that info” problems are due to the lack of a common, not-yet-established platform for understanding QM. English is not my native language, but I’m taking my classes in English. Although I talk and walk like Borat in real life, when it comes to reading an English paragraph it’s really not that difficult; you’re not computing with every letter. But when it comes to playing with concepts that require more words, English fails for me (perhaps I should move to Germany, they’ve got a word for everything). I’m not having difficulty with the English part; I’m having difficulty with the way someone else explains it from his point of view, with the details that he finds necessary to remind the listener of, which I can find irrelevant or distracting.

But when I see a 3D representation of it, the whole subject, all that long paragraph, turns into a piece of cake. I actually wasn’t trying to promote BM or anything (although I’m voting for it internally), but merely suggesting that, as the article says, we can do something about this “which approach is correct?” problem. And, as a professional programmer and a physics undergraduate, I believe we can create tools for the next physicists, so that they can distinguish which approach is better, earlier and faster. It kind of worked when scientists wrote an online game for gamers to solve the folding of some protein, and it turned out it really was useful to use the resources of (almost mathematically thinking) humans, at least in terms of timing. [Late edit] Instead of more beautifully CGI’d and really coolly narrated documentaries that will leave your mouth open, create accurate simulations, primarily for students, then for the public. We don’t need a new Carl Sagan, we already have a perfect Carl Sagan. I’m sorry, I love NDT, but just like Michio Kaku, it seems like instead of promoting science he started to promote himself, especially after I saw NDT in Zoolander 2. It’s not the 1980s anymore; showing those awe-inspiring documentaries on TV or the internet is not going to make people do a U-turn, jump into science, and commit themselves to contributing even a single tiny bit of useful info. I’m not saying they are useless, but there are more efficient ways to really promote science… I just had to add this part.

> **bhobba:** “I am however not a big fan of the usual way QM is taught, i.e., in a semi-historical way where you need to unlearn stuff as you go, just like the original pioneers did. I like much better an approach that gets to the heart of the matter from the start.”

I like both approaches. They are complementary to each other, so the most complete understanding is achieved by knowing both. The problem with the direct, non-historical approach is that it often makes it difficult to figure out how on earth one could arrive at such an original idea in the first place. This is especially important if you are a scientist who wants to discover something fundamentally new yourself.

I’m also against the historical approach, but you cannot teach quantum theory to beginners by just throwing the rigged-Hilbert-space postulates at them and then calculating fancy things. That’s math, but not physics. Particularly for QT it’s important to stay down to earth in the physical foundations, i.e., phenomenology. So you should have, in the first one or two lectures, a review of the failure of classical physics, but emphasizing experiments. I’d start with Rutherford’s famous gold-foil experiment and discuss why a classical picture of electrons running around a nucleus must be wrong (including why the Bohr–Sommerfeld atom must be wrong too; it’s more an example of a bad theory than a good one, and Bohr was the first who knew that; that’s why he was very enthusiastic about the discovery of the modern theory by Heisenberg, Born, and Jordan in 1925). Then I’d discuss de Broglie and Schrödinger (“wave mechanics”), but point right at the beginning to the failure of their “classical-field-theory interpretation”. After that, you’ve formulated QM in terms of wave mechanics, including the Schrödinger equation, and then you can formalize it into the Hilbert-space formalism à la Dirac (with some hints at the caveats concerning the math of unbounded self-adjoint operators). From then on you can build up the theory in a systematic way from symmetry principles, which are at the heart of all physics, quantum as well as classical.

One must avoid in any case a detailed treatment of the Bohr–Sommerfeld model and a naive photon picture (I think photons, and the relativistic theory in general, must not be discussed before you can teach QFT). Completely obsolete, and useless in being rather confusing than helpful, are remnants of the “old quantum theory” like wave–particle dualism or vague ideas like complementarity. The right interpretation to teach in the physics course is the minimal interpretation (aka “shut up and calculate”). One can and should mention quantum correlations and long-range correlations, aka entanglement, and the empirical refutation of local deterministic hidden-variable theories, to give a glance at these issues that some people consider a metaphysical problem.

Philosophy may be interesting for scholars who are familiar with this physical core of the theory. It’s not as interesting for physics itself as one might think at first glance.