# The 7 Basic Rules of Quantum Mechanics


For reference purposes and to help focus discussions on Physics Forums in interpretation questions on the real issues, there is a need for fixing the common ground. There is no consensus about the interpretation of quantum mechanics, and – not surprisingly – there is disagreement even among the mentors and science advisors here on Physics Forums. But the following formulation in terms of 7 basic rules of quantum mechanics was *agreed upon among the science advisors* of Physics Forums in a long and partially heated internal discussion on ”Best Practice to Handle Interpretations in Quantum Physics”, September 24 – October 29, 2017, based on a first draft by @atyy and several improved versions by @tom.stoer. Other significant contributors to the discussions included @fresh_42, @kith, @stevendaryl, and @vanhees71.

I slightly expanded the final version and added headings and links to make it suitable as an Insight article. A revised version of this article is published as Section 1.1 of my recent book

- Coherent Quantum Physics: A Reinterpretation of the Tradition, de Gruyter, Berlin 2019.


**The 7 Basic Rules**

The basic rules reflect what is almost generally taught as the basics in quantum physics courses around the world. Often they are stated in terms of axioms or postulates, but this is not essential for their practical validity. In some interpretations, some of these rules are not considered fundamental rules but only valid as empirical or effective rules for practical purposes.

These rules describe the basis of the quantum formalism and are found in almost all introductory quantum mechanics textbooks, among them: Basdevant 2016; Cohen-Tannoudji, Diu and Laloe 1977; Dirac 1930, 1967; Gasiorowicz 2003; Greiner 2008; Griffiths and Schroeter 2018; Landau and Lifshitz 1958, 1977; Liboff 2003; McIntyre 2012; Messiah 1961; Peebles 1992; Rae and Napolitano 2015; Sakurai 2010; Shankar 2016; Weinberg 2013. [Even Ballentine 1998, who rejects rule (7) = his process (9.9) as fundamental, derives it at the bottom of p.243 as an effective rule.] There are generalizations of these rules (e.g., Auletta, Fortunato, and Parisi 2009; Busch, Grabowski, and Lahti 2001; Nielsen and Chuang 2011) for degenerate eigenvalues, for mixed states, and for measurements not defined by self-adjoint operators but by POVMs. These generalizations are necessary to be able to apply quantum mechanics to all situations encountered in practice. The basic rules are carefully formulated so that they are correct as they stand and at the same time fully compatible with these generalizations.

When stating the rules, *italic text* refers to the *physical system*, its *preparation*, *measurement*, *measured values*, etc.; non-italic text refers to the mere mathematical objects that represent the physical system, etc.

1. A *quantum system* is described using a Hilbert space ##\mathcal{H}##. Often, this Hilbert space is assumed to be separable.

2. A *pure state of a quantum system* is represented by a normalized vector ##|\psi \rangle## in ##\mathcal{H}##; state vectors differing only by a phase factor of absolute value 1 represent the same state. In the position representation, where the Hilbert space is the space of square-integrable functions of a position vector ##x##, ##\psi(x)## is called the *wave function* of the system.

3. The *time evolution of an isolated quantum system* represented by the state vector ##|\psi(t)\rangle## is given by
$$\mathrm{i} \hbar\frac{\mathrm{d}}{\mathrm{d} t} |\psi(t) \rangle = H \, |\psi(t) \rangle,$$
where ##H## is the Hamilton operator and ##\hbar## is Planck's constant. This is the **Schrödinger equation**. This rule is valid in the formulation of quantum mechanics called the Schrödinger picture. There are other, equivalent formulations of the time evolution, especially the Heisenberg picture and the Dirac (interaction) picture, where time evolution is entirely or partially shifted from the state vector to the operators.

4. An *observable of a quantum system* is represented by a Hermitian operator ##A## with real spectrum acting on a dense subspace of ##\mathcal{H}##.

5. The *possible measured values of a measurement of an observable* are the spectral values of the corresponding operator ##A##. In the case of a discrete spectrum, these are the eigenvalues ##a## satisfying ##A\, |a\rangle = a\, |a\rangle##.

6. Let ##\{|a\rangle\}## be a complete set of (generalized) eigenvectors of the self-adjoint operator ##A## with spectral values ##a##, and let the *quantum system be prepared in a state* represented by the state vector ##|\psi\rangle##. If a *measurement of the observable* corresponding to ##A## is performed, the *probability* ##p_\psi(a)## to find the *measured value* ##a## is given by
$$p_\psi(a) = |\langle a | \psi\rangle|^2.$$
This is the **Born rule**, in a formulation that assumes that all eigenvalues are nondegenerate.

7. For *successive, non-destructive projective measurements* with discrete results, each *measurement with measured value* ##a## can be regarded as *preparation* of a new state whose state vector is the corresponding eigenvector ##|a\rangle##, to be used for the calculation of subsequent time evolution and *further measurements*. This is the **von Neumann projection postulate**.
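As a numerical illustration (my own sketch, not part of the agreed text), rules (4)–(7) can be traced for a single qubit; the observable and state below are arbitrary choices:

```python
import numpy as np

# Observable: Pauli-Z, a Hermitian operator with eigenvalues +1 and -1 (rules 4, 5).
A = np.array([[1.0, 0.0], [0.0, -1.0]])
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues sorted ascending: [-1, 1]

# Pure state |psi> = (|0> + |1>)/sqrt(2), a normalized vector (rule 2).
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule (rule 6): p(a) = |<a|psi>|^2 for each eigenvector |a>.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
assert np.allclose(probs, [0.5, 0.5])   # both outcomes equally likely here

# Projection postulate (rule 7): after measuring a = +1, the prepared state
# is the corresponding (normalized) eigenvector, used for further evolution.
post = eigvecs[:, np.argmax(eigvals)]
assert np.isclose(np.linalg.norm(post), 1.0)
```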

**Formal Comments**

(2) To be precise, a pure state is not represented by a unit vector but by a unit ray, i.e., the equivalence class $$[\psi] = \{ e^{\mathrm{i} \varphi} |\psi \rangle : \varphi \in \mathbb{R} \}$$ of a normalized vector ##|\psi \rangle## in ##\mathcal{H}##.

Equivalently, a pure state can be represented by a rank 1 density operator ##\rho=|\psi \rangle\langle\psi|## satisfying ##\rho^2=\rho=\rho^*## and ##Tr~\rho=1##. Mixed states are represented by more general (non-idempotent) Hermitian density operators of trace 1.
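These properties are easy to check numerically (an illustrative sketch of mine, not from the article): a phase change leaves ##\rho## unchanged, a pure-state ##\rho## is idempotent with unit trace, and a mixed state is not idempotent.

```python
import numpy as np

psi = np.array([1.0, 1j]) / np.sqrt(2)   # an arbitrary normalized state vector
phase = np.exp(1j * 0.7)                 # arbitrary phase factor of absolute value 1

rho = np.outer(psi, psi.conj())                        # rho = |psi><psi|
rho_phase = np.outer(phase * psi, (phase * psi).conj())

assert np.allclose(rho, rho_phase)            # the ray, not the vector, matters
assert np.allclose(rho @ rho, rho)            # rho^2 = rho (pure state)
assert np.allclose(rho, rho.conj().T)         # rho = rho* (Hermitian)
assert np.isclose(np.trace(rho).real, 1.0)    # Tr rho = 1

# A mixed state: equal mixture of |0> and |1>; unit trace but not idempotent.
mixed = 0.5 * np.eye(2)
assert not np.allclose(mixed @ mixed, mixed)
```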

(3) It is equivalent to define the time evolution of an isolated quantum system by $$|\psi(t)\rangle = U(t)\,|\psi(0)\rangle$$

with the unitary time evolution operator

$$U(t) = e^{-iHt/\hbar}.$$

The evolution according to (3) is therefore also referred to as **unitary evolution.**
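This equivalence is easy to verify numerically (my own sketch; the Hamiltonian is an arbitrary Hermitian matrix and ##\hbar## is set to 1):

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]])   # an arbitrary Hermitian Hamiltonian
t = 0.3

# U(t) = exp(-i H t / hbar), computed via the spectral decomposition of H.
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T
assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary

psi0 = np.array([1.0, 0.0])
psi_t = U @ psi0
assert np.isclose(np.linalg.norm(psi_t), 1.0)   # the norm is preserved

# Finite-difference check of the Schrodinger equation at time t:
# i hbar d/dt |psi(t)> = H |psi(t)>
dt = 1e-6
Udt = V @ np.diag(np.exp(-1j * E * dt / hbar)) @ V.conj().T
deriv = (Udt @ psi_t - psi_t) / dt
assert np.allclose(1j * hbar * deriv, H @ psi_t, atol=1e-5)
```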

(4) Equivalently, ##A## is self-adjoint.

(6) In the case of degenerate eigenvalues, let ##\{|a,\nu \rangle\}## be a complete set of (generalized) eigenvectors of ##A##, indexed by ##\nu##. The *probability* ##p_\psi(a)## to find the *measured value* ##a## is then given by summing (or integrating) over ##\nu##, i.e., over the entire ##a##-subspace:

$$p_\psi(a) = \sum_\nu |\langle a,\nu | \psi\rangle|^2.$$
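As a toy illustration of the degenerate case (my own example): for ##A = \mathrm{diag}(1,1,2)## on ##\mathbb{C}^3##, the eigenvalue ##a=1## is doubly degenerate and ##p_\psi(1)## sums two terms; the result is independent of the basis chosen in the eigenspace.

```python
import numpy as np

# A = diag(1, 1, 2): the eigenvalue a = 1 is doubly degenerate.
A = np.diag([1.0, 1.0, 2.0])
psi = np.array([1.0, 2.0, 2.0]) / 3.0      # normalized: (1 + 4 + 4)/9 = 1

eigvals, eigvecs = np.linalg.eigh(A)
# Sum |<a,nu|psi>|^2 over all eigenvectors with eigenvalue 1.
p1 = sum(abs(eigvecs[:, k].conj() @ psi) ** 2
         for k in range(3) if np.isclose(eigvals[k], 1.0))
assert np.isclose(p1, 5.0 / 9.0)   # |1/3|^2 + |2/3|^2
```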

(7) The projection postulate is valid only under the assumptions stated; examples are passing barriers with holes or slits, polarization filters, and certain other instruments that modify the state of a quantum system passing through them. This (nonunitary, dissipative) change of the state to an eigenstate in the course of a projective measurement is often referred to as "state reduction", "collapse of the wave function", or "reduction of the wave packet". Note that there is no direct conflict with the unitary evolution in (3) since, during a measurement, a system is never isolated.

In other cases, the prepared state may be quite different. (See the discussion in Landau and Lifschitz, Vol. III, Section 7.) The most general kind of quantum measurement and the resulting prepared state is described by so-called positive operator valued measures (POVMs).
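For orientation, here is a minimal POVM sketch (my own toy example, not from the cited references): the three "trine" effects ##E_k## are positive operators that sum to the identity and yield outcome probabilities ##p(k)=\mathrm{Tr}(\rho E_k)##.

```python
import numpy as np

# Trine POVM on a qubit: E_k = (2/3)|v_k><v_k| for three equally spaced
# real unit vectors. This is an unsharp three-outcome measurement.
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
effects = []
for a in angles:
    v = np.array([np.cos(a), np.sin(a)])
    effects.append((2 / 3) * np.outer(v, v))

assert np.allclose(sum(effects), np.eye(2))       # completeness: sum E_k = I

rho = np.array([[1.0, 0.0], [0.0, 0.0]])          # the state |0><0|
probs = [float(np.trace(rho @ E)) for E in effects]
assert np.isclose(sum(probs), 1.0)                # probabilities sum to 1
assert all(p >= 0 for p in probs)                 # and are nonnegative
```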

**Comments on the Interpretation**

Not discussing the foundations of quantum mechanics beyond this point is called **shut-up-and-calculate**. It is a sufficient mode of working for all who do not want to delve into the often highly disputed foundational (and partly philosophical) problems. However, the above rules are often considered conceptually unsatisfactory because they invoke the not-well-defined terms 'probability', 'measurement', and 'observer', whereas in principle one expects that at least measurement and observation can be regarded as quantum mechanical processes or interactions that follow the same fundamental rules and play no special role. The associated issues are treated in different ways by different **interpretations of quantum mechanics**.

In the Copenhagen Interpretation (also called Standard Interpretation or Orthodox Interpretation; terminology and interpretation details vary), the above rules are simply operational rules that work in practice. A state vector is a tool that one uses to calculate the *probabilities of measurement outcomes,* and one is agnostic about whether the state vector represents any object that exists in reality. Rules (6) and (7) apply only when a measurement has occurred. Thus unlike in classical physics, it is not enough to specify the initial conditions of the state and let the state evolve. One must also specify when a measurement has occurred: Generally, *a measurement is understood to have occurred when a definite (irreversible, i.e., nonunitary) measurement result or outcome has been obtained; e.g., the observer records a mark on a screen.* (However, passing a Stern-Gerlach magnet – which in modern terminology is a *premeasurement* only – is frequently but inaccurately considered to be a measurement, although it is described by a unitary process where even in principle no measurement result becomes available.)

A noteworthy aspect of the standard interpretation is that the state vector cannot represent the whole universe, but must exclude an observer or measuring apparatus that decides when a measurement has occurred; this is the so-called **Heisenberg cut** between the quantum and the classical world. To date, this has not been a problem in making successful experimental predictions, so practitioners are often satisfied with quantum formalism and the standard interpretation.

However, many have suggested that there is a conceptual problem with the standard interpretation because the whole universe presumably obeys the laws of physics. So there should be laws of physics that describe the whole universe, without any need to exclude any observer or measurement apparatus from the quantitative description. Then one must be able to derive the rules (5)-(7) for measuring subsystems of the universe from the dynamics of the universe. The problem of how to do this is called the **measurement problem**. A related problem, the problem of the emergence of a classical macroscopic world from the microscopic quantum description, is often considered essentially solved by decoherence.

To solve the measurement problem, other interpretations of quantum formalism or theories have been proposed. These alternative interpretations or theories are based on different postulates than those of the standard interpretation, but seek to explain why the standard interpretation has been so successful (e.g., by deriving the rules of the standard interpretation from other postulates). The major alternative interpretations or theories that have been proposed include Everett’s Relative State Interpretation (“Many-Worlds”), the Ensemble Interpretation (or Minimal Statistical Interpretation), the Transactional Interpretation, and the Consistent Histories Interpretation.

Still, other interpretations (e.g., Bohmian Mechanics, Ghirardi–Rimini–Weber theory, the Cellular Automaton Interpretation, and the Thermal Interpretation) modify one or more of the 7 basic rules and only strive to derive the latter in some approximation, for all practical purposes (FAPP). In particular, rule (7) cannot be fundamental if one wants to interpret the state vector ##|\psi\rangle## in an ontic way, i.e., as some direct and 'faithful' representation of 'externally existing reality' independent of any observer, observation, or measurement.

None of the interpretations currently available has been able to solve the measurement problem in a way deemed satisfactory by those interested in the foundations. So there are still major open problems both with the standard interpretation of quantum mechanics and with alternative interpretations. Fortunately, none of these problems seems to be of any practical relevance.

Full Professor (Chair for Computational Mathematics) at the University of Vienna, Austria

Yes, it did:

https://www.physicsforums.com/forums/quantum-interpretations-and-foundations.292/

Of course, Gill. There is a very active subforum of the QM forum about foundational and interpretation issues. The only rule is our general rule against purely philosophical posts. It is recognised that it will occasionally be tough to avoid such problems, so mentors will keep an eye on it to ensure it doesn't get out of hand. I want to emphasise that we have the philosophy rule not because we are anti-philosophy on this forum. We had a sub-forum on it for many years. It just became low quality, and we do not have the mentor expertise to ensure it is of the appropriate standard.

Arnold has recently posted an interesting paper on his interpretation:

https://www.physicsforums.com/threads/quantum-mechanics-via-quantum-tomography.1007993/

Thanks

Bill

Such an addition would be nice indeed.

It's still dated May 11, 2019, but I now see your modified sentence.

[ @Greg Bernhardt: is there a way for a "last-modified" date to be automatically included in these Insights, as well as the original date?]

I see no such derivation at the bottom of p.243. Rather, the last paragraph on that page talks about how an imperfect apparatus could give rise to the "reduced" state eq. (9.18) by environmental decoherence mechanisms. This is not a "non-destructive projective measurement" of the type addressed by Rule 7. Hence it is incorrect to link the two, as you currently do.

I think you misread Ballentine's sect. 9.5. As I read it, Ballentine's point (starting at the 2nd paragraph on p.242) is this: IF one supposed that all coherence were lost between the wavefunctions at points B and C, then the spin state should be (9.18), i.e., $$\rho^{inc} ~=~ \frac12 \; \Big( |+\rangle \langle +| ~+~ |-\rangle \langle -|\Big).$$ But then, the spin-recombination experiment (with sufficiently good apparatus) described on the rest of p.242 and over onto the top of p.243 would reveal one's error. That's what he means by "evidence" (in my humble opinion, of course, since I'm not a mind reader, though neither is anyone else around here, afaik). In other words, IF one (mistakenly) assumed reduction at points B and C, the actual experiment furnishes evidence of one's mistake.

I'm seeing the new version.

Strange. The new version is online for 18 hours. Maybe you got a cached version. Note that I only edited a few words in that sentence.

I'm still seeing the old version, so I'll wait for the new version to appear and then proofread it.

Yes, I agree with this. I think the experiment he describes is interesting because of the fact that coherence is maintained during the passage of the neutron through a solid object, but I agree it doesn't involve any measurement at B/C so it doesn't tell us anything about state reduction as a result of measurement.

Postulate 7 in the Insight article was explicitly restricted to the special case of projective von Neumann experiments. In the formal comments to the rule, the more general case of POVM measurements is mentioned but not detailed.

Indeed, POVMs also feature state reduction under measurement, though not projective ones. Instead, the posterior state after a measurement is obtained from the prior state by the application of the POVM operator corresponding to the measurement result obtained. For a discussion of POVMs in terms of a single basic postulate see my paper Born's rule and measurement.

You can proofread it now and post your comments here.

In the Insight article, I had originally stated in the second paragraph:

I now replaced it by the more accurate

On p.241, Ballentine writes: ''Some evidence that the state vector retains its integrity, and is not subject to any "reduction" process, is provided by […]''. No state reduction is his basic credo that he wants to support here. He says on the next page that state reduction should produce a mixed state, (9.18), and on p.243 that in a spin-recombination experiment, only the pure state (9.21) is compatible with the experimental results. This is his ''evidence''. Since there was no measurement at the point B/C under investigation – only unitary 2-state dynamics happens – this is no surprise; anyone would agree. It is not a situation where state reduction should be invoked. Thus his ''evidence'' is bogus.
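The interference point can be sketched numerically (my own illustration; the equation labels (9.18) and (9.21) follow Ballentine): a pure recombined state with relative phase ##\chi## shows a phase-dependent fringe in a subsequent ##\sigma_x## measurement, while the incoherent mixture gives a flat 1/2.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
P_plus_x = 0.5 * (np.eye(2) + sx)          # projector onto sigma_x = +1

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# Incoherent mixture, Ballentine's (9.18): relative phase information lost.
rho_inc = 0.5 * (np.outer(up, up.conj()) + np.outer(down, down.conj()))

for chi in np.linspace(0.0, 2 * np.pi, 7):
    # Pure recombined state with relative phase chi (a (9.21)-type state).
    psi = (up + np.exp(1j * chi) * down) / np.sqrt(2)
    rho_pure = np.outer(psi, psi.conj())

    p_pure = np.trace(rho_pure @ P_plus_x).real
    p_inc = np.trace(rho_inc @ P_plus_x).real

    assert np.isclose(p_pure, 0.5 * (1 + np.cos(chi)))   # interference fringe
    assert np.isclose(p_inc, 0.5)                        # flat: no interference
```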

On the other hand, at the end of page 243 he says

This is the effective rule referred to in the Insight article.

The point is that the foundation of quantum mechanics (as the foundation of classical mechanics) refers to closed systems. The behavior of open systems then is derived with many different methods (coarse-graining a la Kadanoff, Baym et al, projection formalism a la Zwanzig et al, influence functional formalism a la Feynman, Vernon, Caldeira, Leggett et al,…).

Yes, it's a preparation procedure. However, it is a preparation procedure that also uses the measurement outcome to label the state prepared.

According to the 7 Basic Rules as given in the Insights article, unitary time evolution only applies to an isolated quantum system. So it is also only referring to a specific experimental setup. Quantum systems in general are not isolated; you have to make a special effort to set up an isolated quantum system in the lab.

In other words, the projection postulate is the description of a specific kind of preparation procedure and not a fundamental postulate of the quantum formalism. So there's no contradiction in my statement but it's the statement!

Don't these two statements contradict each other? Rule 7 in the Insights article points out, correctly, that the projection in the projection postulate *is* a preparation procedure.

We neither need the projection postulate nor a generalization, because what's happening to the system and its description when interacting with a measurement or filter device depends on the specific experimental setup. It's only an opinion that Ballentine's ensemble interpretation without the projection postulate or some generalization of it were incomplete. If you consider real-world experiments, you have a preparation procedure which you have to describe well enough as the initial state of the system in the quantum formalism. Then you have some Hamiltonian describing the system's dynamics, and then you measure it. What's predicted by QT are the probabilities for the outcomes of these measurements.

If you want to know the state of the system after these measurements you must consider this again as a preparation procedure (if you cannot include the interaction with the measurement devices with sufficient accuracy in the Hamiltonian describing the time evolution of the system). Whether or not you perform a more or less well realized projection measurement (corresponding to the collapse postulate) or not depends on the setup and cannot be generally postulated.

@A. Neumaier will have to clarify that part, as I think it wasn't in the drafts I read, or I missed it. However, it is possible that the edition and page numbers are correct, and that A. Neumaier read that as a derivation of effective state reduction, because that is what Ballentine intends 9.21 to be. At this point, Ballentine believes that Interpretation A has a state reduction, and he is trying to explain why Interpretation A seems to work most of the time.

However, this cannot be taken to be a correct derivation of collapse for 9.28, since Interpretation A in fact does not have a state reduction at that point in the experiment being discussed. Only Ballentine's wrong conception of Interpretation A has a state reduction.

But in this formulation of classical mechanics there is an *additional* postulate: that all of the observables commute. So you don't need to add an additional postulate to the 7 to define QM. You need to add one to define *classical* mechanics.

A implies B does not mean B implies A.

The axioms given are pretty standard. So you are saying the standard formalism of QM is wrong. Pretty strong claim – so strong I think a peer reviewed paper is in order before discussing that further.

Thanks

Bill

Advice from a person that actually teaches it. You can go directly from Susskind to Sakurai. I have both books and do prefer Sakurai.

Thanks

Bill

I personally find this one very good.

https://www.amazon.com/GUIDE-DISTRIBUTION-THEORY-FOURIER-TRANSFORMS/dp/9812384219/

Skip Griffiths. It's so sloppy that it causes more confusion than it helps!

I also want to add that, strictly speaking, it's a rigged Hilbert space, and in fact using it you can have things like resonances that are difficult or perhaps even impossible to handle without it. Rafael de la Madrid did a thesis on the full technical detail, although he does not give the proof of the key Generalised Eigenvalue Theorem (also called the Nuclear Spectral Theorem):

http://galaxy.cs.lamar.edu/~rafaelm/webdis.pdf

An outline of the proof can be found here:

https://www.uni-ulm.de/fileadmin/we…SS15/qm/lnotes_mathematical_found_qm_temp.pdf

Note that the proof in the main tome on the subject by Gelfand – Generalised Functions (now – gulp – I think 6 volumes) is generally considered wrong (but may now have been fixed); however, correct proofs can be found in other sources. I did look up one once at a university library when I was interested in such things, but have now outgrown these sorts of pedantic niceties.

Not for the beginning student, except to keep in mind as you become more advanced. For the beginning student I do HIGHLY recommend the following, not just for QM, but for any applied or pure mathematician – it's worth it for its treatment of the Fourier transform alone:

https://www.amazon.com/dp/0521558905/?tag=pfamazon01-20

Thanks

Bill

Well for that it's best to introduce Feynman's path integral approach from those axioms. I did it in a series of posts I made in the classical mechanics sub-forum:

https://www.physicsforums.com/threads/what-do-newtons-laws-say-when-carefully-analysed.979739/

Basically, classical mechanics is QM where you can cancel most paths and get the classical Principle of Least Action.

Thanks

Bill

Pedagogically I like Ballentine, but though many agree, not all do. And you need to work up to it – to start with I actually like Susskind's theoretical minimum book, then Griffiths, then Sakurai, then Ballentine. But having an agreed set of axioms is good – and the ones here I like.

Thanks

Bill

What is it Dirac calls it – I think a complete set of commuting observables. Not that I recommend using Dirac as the book to base the axioms on. Everyone should eventually own a copy because of its historical significance, but I had the misfortune to use it as my first serious introduction to QM and now regret it. Nor do I recommend the next book I read – von Neumann's classic – although serious students should also own a copy of that.

Thanks

Bill

Indeed. The rules in the article are excellent.

Just to elaborate on what Ballentine does. He only uses two rules:

1. The eigenvalues of Hermitian operators, O (called observables), from some vector space, are the possible outcomes of the observation represented by the operator. Or words to that effect – I can dig up my copy for the exact wording if required.

2. The average of those outcomes, E(O), is given by E(O) = Trace (OS) where S is a positive operator of unit trace called the state of the system.
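These two rules can be checked for consistency numerically (my own toy example): for a pure state S, Trace(OS) reproduces the Born-rule weighted average of the eigenvalues of O.

```python
import numpy as np

O = np.array([[2.0, 1.0], [1.0, 0.0]])    # an arbitrary Hermitian observable
psi = np.array([0.6, 0.8])                # a normalized state vector
S = np.outer(psi, psi)                    # pure state as a density operator

# Ballentine's rule 2: E(O) = Trace(OS).
expectation = np.trace(O @ S)

# The same number from the eigenvalue-weighted Born-rule average.
w, V = np.linalg.eigh(O)
born = sum(wk * abs(V[:, k] @ psi) ** 2 for k, wk in enumerate(w))
assert np.isclose(expectation, born)
```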

Note that 2 to some extent follows from 1 by Gleason's theorem, but that is a whole thread in itself and hinges on non-contextuality, which even the great von Neumann got 'wrong', and Grete Hermann was ignored when she pointed it out – not one of science's finest hours.

How does he get away with 2? He is sneaky: the rest are introduced as assumptions so reasonable you do not notice they are assumptions, e.g., his derivation of Schrödinger's equation assumes the POR and Galilean transformations, but this is not stated explicitly – he just assumes probabilities are frame independent, which is so 'obvious' you do not recognise, unless you think about it, that it's invoking the POR. Elegant, but it hides important details – it's still my favourite treatment though. Also, there is another assumption not mentioned in the above: for two systems treated as a single system you take the combination of the vector spaces, i.e., the space generated from the basis vectors of both spaces; that is hardly ever mentioned, although it is an assumption. QM is a bit quirky like that – it can be presented in a way where assumptions seem so natural you do not recognise them as assumptions. There are probably others I haven't mentioned, and perhaps do not even realise myself.

Thanks

Bill

He gets out the numbers. But to get out their meaning as probabilities for scattering results, he needs the standard Hilbert space framework! Indeed, Zeidler starts with that…..

and he recovers only (and only an asymptotic series for) the asymptotic S-matrix, no finite time dynamics.

The problem is that you need to assume the positivity of the quantum measure. This cannot be proved for the functional integrals used in QFT – else they would produce finite results without the need for regularization.

Even in quantum mechanics, proving positivity requires somewhere a Hilbert space argument….

The usual Hilbert space formulation is primary, and the path integral formulation is secondary. The path integral formulation allows us to do quantum mechanics in the language of statistical mechanics. Not all statistical mechanics path integrals correspond to quantum theories (i.e., they may lack unitary evolution, etc.). The constraints on the path integrals that make them correspond to quantum theories come from the Hilbert space formulation, which is why the Hilbert space formulation is primary.

In the context of relativistic quantum field theory, a set of constraints on path integrals are the Osterwalder-Schrader axioms.

http://www.einstein-online.info/spotlights/path_integrals.html

https://ncatlab.org/nlab/show/Osterwalder-Schrader+theorem

How do you ensure that in the context of a path integral?

Under your assumptions you'd just have a single free particle. Nothing asymptotic here.

Once you have a Hilbert space and a (not necessarily irreducible) unitary representation of such a group, its infinitesimal generators are represented by operators. This gives operators for energy, momentum, angular momentum, and boosts (of the total system).

It is better to study the subject in some more depth than to dabble in unfounded speculations. It takes some time to become familiar with all the relevant relations between the various approaches and to see what which approach offers and misses.

Yes. The QFT path integral is derived from the QM path integral, which is derived from the Schrödinger equation. Without the latter, one would never know that the path integral formulation is a valid formulation of QM/QFT.

No. With the path integral formulation (but without the equivalent traditional formulation), you don't even have a Hilbert space (unless you work in the closed time path setting, which is not common knowledge).

But if you consistently and exclusively do QM in the Heisenberg picture, it looks just like QFT, just with a 1D space-time in place of 4D.

Not fundamental means only effective.

Ballentine doesn't restrict to arbitrary controlled experiments but to the much smaller class of ''filtering-type measurements'' by selection, where collapse is equivalent to taking conditional expectations.

whereas Ballentine said explicitly that it is not a fundamental process.

I enjoyed something like that too from my teacher. When I first learned quantum mechanics (Xiao-Gang Wen was the lecturer), the postulates were taught very early, but not in the first lesson. If I recall correctly, the first lecture was about dimensional analysis – to introduce Planck's constant – lecture 2 was a tour of the ultraviolet problem and old quantum physics, and the postulates were introduced in lecture 3. Then after that, wave mechanics was always done in the context of the postulates.

I believe @vanhees71 has advocated something like that in these forums, though I should let him speak for himself.

What I say about Ballentine in the Insight article (in the slightly polished formulation of this morning – collapse rejected as fundamental but accepted as effective) was designed to be compatible with what he says in his book. It seems to me also compatible with a suitable interpretation of what you say in this quote.

On p.236-238, Ballentine gives a long argument for his rejection of the conventional formulation of (7) = his (9.9) in the density operator version:

He accepts it only as an effective view (p.243f)

and (rightly, like Landau and Lifshits, but unlike many other textbooks) only under special circumstances (p.247):

This is why (7) is formulated in the cautious way given in the Insight article.

The postulates don't say much without the examples. Thus one has to introduce both in parallel, starting with things that make for an easy bridge, such as optical polarization – see my insight article on the qubit.

It maps an arbitrary pure state with state vector ##\psi## into a pure state with state vector ##\hat P\psi##. That's enough in the present context.

Only if you want to prepare a pure state from an arbitrary mixed state then ##\hat P## must project to a 1-dimensional subspace.

Busch et al. nowhere refer to reduction. Your use of the term is nonstandard. What is termed state reduction is a process that turns pure states into pure states. It corresponds to the Lüders operator ##\rho\to P\rho P## discussed at the end of Section II.3.1 and in II.4.
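A small sketch of the Lüders update (my own illustration, with an arbitrary rank-2 projector): ##\rho \to P\rho P/\mathrm{Tr}(P\rho P)## keeps the trace at 1 and maps pure states to pure states.

```python
import numpy as np

# Projector P onto the span of |0> and |1> in C^3 (rank 2, so not rank 1).
P = np.diag([1.0, 1.0, 0.0])

psi = np.array([1.0, 2.0, 2.0]) / 3.0
rho = np.outer(psi, psi)                  # pure prior state

p = np.trace(P @ rho @ P).real            # probability of the outcome
rho_post = (P @ rho @ P) / p              # Luders rule: normalized posterior

assert np.isclose(np.trace(rho_post).real, 1.0)
# A pure prior stays pure under the Luders rule: rho'^2 = rho'.
assert np.allclose(rho_post @ rho_post, rho_post)
```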

Neither of these is the case, so drawing conclusions based on your assumptions has nothing to do with physics.

The conditioning is always on what is measured afterwards – *simultaneous* measurement is different and unrelated to the collapse.

But this is different from collapse, which says that given the result you can simply work with the projected state – which is what is done in practice. Without projection one must always carry the complete context around (a full ancilla in an extended Hilbert space), which is awkward when making a long sequence of observations.

Although position and momentum do not commute, there are joint position and momentum measurements (e.g., from tracks in bubble chambers or wire chambers), though their accuracy is limited by Heisenberg's uncertainty relation.
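As a check on the limiting case (my own sketch, with ##\hbar=1## and arbitrary grid parameters): a Gaussian wave packet saturates Heisenberg's bound, ##\Delta x\,\Delta p = \hbar/2##.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]

# A Gaussian wave packet (the width sigma is an arbitrary choice).
sigma = 1.3
psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize on the grid

# Position spread from |psi(x)|^2.
mean_x = np.sum(x * np.abs(psi)**2) * dx
dx_spread = np.sqrt(np.sum((x - mean_x)**2 * np.abs(psi)**2) * dx)

# Momentum spread from the Fourier transform of psi (p = hbar k).
p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx) * hbar
dp = 2 * np.pi * hbar / (x.size * dx)
phi = np.fft.fft(psi)
phi = phi / np.sqrt(np.sum(np.abs(phi)**2) * dp)   # normalize
mean_p = np.sum(p * np.abs(phi)**2) * dp
dp_spread = np.sqrt(np.sum((p - mean_p)**2 * np.abs(phi)**2) * dp)

# A Gaussian saturates the bound: Delta x * Delta p = hbar/2.
assert np.isclose(dx_spread * dp_spread, hbar / 2, rtol=1e-2)
```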

The rules are precise formulations of corresponding statements found more loosely formulated *in all textbooks cited* (apart from Ballentine). Rule (7) appears there usually in an unqualified (and hence incorrect) form to which your criticism may apply. But I don't understand what you consider contentious in the actual formulation of (7). It surely applies in the cases listed under ''Formal Comments'' on (7): it is needed to know what is prepared after passing a barrier (e.g., singling out a ray) or polarizer (singling out a polarization state).

I slightly expanded the final version and added headings and links to make it suitable as an Insight article. Maybe the participants of the discussion 20 months ago can confirm their continued support or voice disagreements with this public version.