# Experiment -> QM postulates

## Main Question or Discussion Point

I have the mathematical background to compute things in quantum mechanics. If you give me a standard QM problem to solve I could probably do it. However, I have an extreme lack of understanding of why the postulates of QM are set up the way they are - that the complete description of a particle is given by an element of a Hilbert space and that there are these operators that represent position, momentum, angular momentum, and so on that seem like they were pulled out of a hat.

Now the basis for any physical theory is grounded in experiment, so I would like to know the observations that establish the way quantum mechanics is set up. Thought experiments with results would be fine, I don't need the actual data from real experiments.

In other words, I would like to understand why the QM formalism is set up the way it is, from a mathematical modeling perspective.


alxm
> I have the mathematical background to compute things in quantum mechanics. If you give me a standard QM problem to solve I could probably do it. However, I have an extreme lack of understanding of why the postulates of QM are set up the way they are - that the complete description of a particle is given by an element of a Hilbert space and that there are these operators that represent position, momentum, angular momentum, and so on that seem like they were pulled out of a hat.
Well, you could see, for instance, the first three chapters of Landau and Lifshitz's "Quantum Mechanics", which derive these operators from actual postulates - and indeed all of QM in a more formal way than the ad-hoc/empirical way it was historically developed (which most introductory textbooks follow, and which creates a bit of confusion about what exactly needs to be assumed).

I would recommend the first chapter of Sakurai's book, where he introduces much of the machinery for the simplest possible quantum system - the two-state spin system - and shows why certain things are necessary in order to explain the Stern-Gerlach experiment.
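A minimal numerical sketch of that two-state machinery (assuming NumPy; the operators and state below are just the standard spin-1/2 ones, not anything specific to Sakurai's presentation): prepare a beam in the $S_z = +$ eigenstate and ask for the outcome probabilities of a subsequent $S_x$ measurement.

```python
import numpy as np

# Pauli matrices for the spin-1/2 system (factors of hbar/2 omitted for clarity)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Beam selected by a z-oriented magnet: the Sz = +1 eigenstate
up_z = np.array([1, 0], dtype=complex)

# Eigenvectors of Sx are the possible outcomes of an x-oriented magnet
evals, evecs = np.linalg.eigh(sx)

# Born rule: probability of each Sx outcome is |<eigenvector|state>|^2
probs = np.abs(evecs.conj().T @ up_z) ** 2
print(probs)  # both outcomes equally likely: 0.5 each
```

This reproduces the sequential Stern-Gerlach result: a beam fully polarized along z still splits 50/50 along x, which is hard to account for without amplitudes.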

For me the most satisfactory axiomatic foundation of quantum mechanics is in the "quantum logic" approach. In this approach one simply takes standard axioms of classical logic and probability and realizes that one axiom (the distributive law) is not well-founded. In "quantum logic" the distributive law is replaced by a more general statement (the orthomodular law). Then the entire quantum formalism (the Hilbert space, Hermitian operators, etc.) follows from this new set of axioms. Unfortunately, you won't find these ideas in standard QM textbooks. For a beginner reading I can recommend this:

G. Birkhoff, J. von Neumann, "The logic of quantum mechanics", Ann. Math., 37 (1936), 823

G. W. Mackey, "The mathematical foundations of quantum mechanics" (W. A. Benjamin, New York, 1963), esp. Section 2-2.

C. Piron, "Foundations of Quantum Physics", (W. A. Benjamin, Reading, 1976).

Ok, thanks all. I have managed to obtain a copy of Landau and Lifshitz's tome and will look into that first.

dx
See chapter 5 of The Feynman Lectures, Vol III.

Ok, I will see if I can get my hands on these other sources.

Landau and Lifshitz explain satisfactorily where the various operators for position, momentum, etc. come from (you can derive their form from symmetries). I can also accept the way they sweep the measurement question under the rug, at least for now (postulating that there exist "classical objects" without really going into what they are).
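As a toy check of that symmetry argument (a sketch, assuming NumPy and units with $\hbar = 1$): momentum is the generator of spatial translations, i.e. $\hat{p} = -i\,d/dx$, so acting on a plane wave $e^{ikx}$ it should return $k$ times the same wave.

```python
import numpy as np

# Plane wave exp(i k x) on a uniform grid
k = 2.5
x = np.linspace(0.0, 10.0, 10001)
dx = x[1] - x[0]
psi = np.exp(1j * k * x)

# Momentum operator -i d/dx via central finite differences
p_psi = -1j * np.gradient(psi, dx)

# Away from the boundaries, p_psi / psi should be the eigenvalue k
ratio = p_psi[1:-1] / psi[1:-1]
print(ratio.real.mean())  # ~ 2.5
```

The plane wave comes out as an eigenfunction of the translation generator with eigenvalue $k$, which is exactly the de Broglie relation $p = \hbar k$ in these units.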

However, he starts off with a statement about the probability of getting a certain result from a measurement, without going into where it comes from. I think the following question is the lynchpin to my understanding:

Let $\{q_i\}$ be a complete set for the state of a quantum particle, and let S be the corresponding parameter space. For a particular particle, let the probability of measuring the particle in region $\Omega \subset S$ be denoted $P_\Omega$. That is, if you measured each $q_i$ independently - which you can do since it is a complete set - $P_\Omega$ is the probability that the measured parameters lie in the region $\Omega$ of parameter space.

Then why is it that there exists a complex-valued function $\psi :S \rightarrow \textbf{C}$ (the wavefunction of the particle) such that

$$P_\Omega = \int_\Omega \psi^*\psi$$

Certainly you could imagine a whole host of other possible ways of defining the probability. For example, any function $f:\{\text{measurable } U \subset S\} \rightarrow [0,1]$ satisfying the probability axioms might seem reasonable. Obviously we have chosen this particular one (the existence of such a wavefunction) for physical reasons that I do not understand.
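For concreteness, here is what the rule in question computes in the simplest textbook case (a sketch, assuming NumPy; the particle-in-a-box ground state is just a convenient normalized $\psi$, not anything tied to the argument above):

```python
import numpy as np

# Ground state of a particle in a box of width L = 1: psi(x) = sqrt(2/L) sin(pi x / L)
L = 1.0
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)

# Born rule: P(Omega) is the integral of psi* psi over Omega
density = (np.conjugate(psi) * psi).real
p_total = density.sum() * dx       # Omega = all of S: normalization gives 1
mask = x <= L / 2
p_left = density[mask].sum() * dx  # Omega = [0, L/2]: 0.5 by symmetry
print(p_total, p_left)
```

The point of the question stands: nothing in this calculation explains *why* probabilities should arise as the squared modulus of a complex amplitude rather than from some other set function on S.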

Fra
Here is my opinion on this.

This is a really good question for anyone to ask. I'd say the inference process by which experiments suggest physical laws, or the postulates they follow from, is certainly not deductive in nature. To me, the first step in trying to understand the logic of science is to see that we aren't dealing with deductive reasoning. With such reasoning, the best you can do is typically to compress the understanding into a set of axioms from which all the rest follows deductively. This is an improvement in the sense of data compression, but it does not solve the original problem: you are still stuck with a set of axioms and presumed connections to reality that, formally, are chosen at will one way or the other.

This compressed version may not always be the most natural way to accept the theory. It seems to depend on personal preferences.

I've personally found that if you are looking for deductive patterns, you can easily blind yourself to inductive patterns. That's my personal experience, at least.

There is always an element of speculation in how understanding develops, and in my understanding of the world, this speculation is even part of the key to understanding nature. I've asked myself these questions as well, and my personal answer is that a different logic is needed.

It's clear that, given current experimental experience, the QM formalism does produce good predictions. This alone is a kind of informal likelihood argument in favour of the choice of formalism where we stand. But to ask "how did we get here" is a question about how our understanding has evolved, and the historical development of QM does present a somewhat plausible argument for the choice of speculation that later proved successful.

Asking myself these questions in the past has led me to my own quest, and I think the future problems of physics, such as finding a consistent theory of quantum gravity and many other things, require a reformulation of QM foundations. Perhaps you should take seriously every single "objection" you have to the introduction of QM; maybe this will lead you to something even better.

My main objection to QM foundations is the physical basis of the probability space and measurement operators. There are too many ad hoc ensembles and ideas of "repeating experiments" to get statistics, where it's sometimes clear that the experiments happen once and never again. So a large part of the basis has no justification in the general case. I've stopped trying to make sense of this because, IMHO, it doesn't make sense. Incidentally, when you analyse these problems, you are led into problems very similar to those that appear in presumed QG theories. I don't think this is a coincidence. So instead of trying to invent a logic (make an excuse) that makes sense out of standard QM, I think it can be improved and extended.

Apart from that ambition, the historical development combined with the support of agreement with experiment is the best I have seen.

/Fredrik

Fra
> Let $\{q_i\}$ be a complete set for the state of a quantum particle, and let S be the corresponding parameter space.
Even here things are fuzzy IMO:

What is the time-finite physical process whereby an observer can infer such a complete set with certainty from interacting with its environment - and be sure that the set isn't inflating as time goes by? Also, how large a space can a given finite observer relate to? And exactly what happens when the observer is saturated in this respect?

IMHO, already here there is something fishy. In special cases, it's certainly easy to argue that this objection is negligible. But in the general case, the story is different. And if we're talking about a fundamental formalism, then I think the general case is what we should be able to handle?

I think this objection seems silly to many, because how can you ever know anything for sure? My point is that we can infer a state space from interactions, but it cannot be known to be complete. But OTOH, maybe this doesn't matter, because if we consider a kind of locality principle that says a system responds only to information at hand, then it responds to whatever state space it has inferred. It doesn't matter that this is not complete.

But the difference may come later, when you build the theory. If you ask to understand the postulates, the question is which assumption makes more sense. It's not hard to see, either, how a flawed early assumption can grow twists into the development of the theory later on.

/Fredrik

dx
> Here is my opinion on this.
>
> This is a really good question for anyone to ask. I'd say the inference process by which experiments suggest physical laws, or the postulates they follow from, is certainly not deductive in nature. [...]
>
> /Fredrik

> [...]
> Then why is it that there exists a complex-valued function $\psi :S \rightarrow \textbf{C}$ (the wavefunction of the particle) such that
>
> $$P_\Omega = \int_\Omega \psi^*\psi$$
>
> Certainly you could imagine a whole host of other possible ways of defining the probability. [...] Obviously we have chosen this particular one (the existence of such a wavefunction) for physical reasons that I do not understand.
It might not be your kind of intuition, but complexifying the wavefunction gives an easy road to interference patterns in the double-slit experiment by means of plane waves. And $P_\Omega = \int_\Omega \psi^*\psi$ is a natural norm on those functions, one that leads to a scalar product.
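A quick sketch of that interference mechanism (assuming NumPy; the slits are treated as point sources, with the $1/r$ prefactors dropped for clarity - the geometry below is made up for illustration): complex amplitudes add, and the cross term in $|\psi_1+\psi_2|^2$ is exactly the fringe pattern.

```python
import numpy as np

k = 2 * np.pi            # wavenumber (wavelength = 1)
d = 5.0                  # slit separation
L_screen = 100.0         # slit-to-screen distance
screen_x = np.linspace(-10, 10, 2001)

# Path lengths from each slit to each point on the screen
r1 = np.hypot(screen_x - d / 2, L_screen)
r2 = np.hypot(screen_x + d / 2, L_screen)

# Amplitudes add; probabilities do not
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)
intensity = np.abs(psi) ** 2

# |psi1 + psi2|^2 = 2 + 2 cos(k (r1 - r2)): the cosine term is the fringe
print(intensity.max(), intensity.min())  # ranges from ~4 down to ~0
```

Had we added the probabilities $|\psi_1|^2 + |\psi_2|^2$ instead of the amplitudes, the intensity would be a featureless 2 everywhere, so the experiment forces the amplitude-level description.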

So then you might say,
"considering only position, the double-slit experiment demonstrates that the particle's position is described by a complex wavefunction $\psi :\textbf{R}^3 \rightarrow \textbf{C}$"
and then
"considering only angular momentum, the Stern-Gerlach experiments demonstrate that the particle's angular momentum is described by a complex wavefunction $\psi : \{+z,-z\}\rightarrow \textbf{C}$"
and you would have to construct another experiment for every different observable quantity $q_i$ in the complete set.

Then you would prove that this implies there is a total wavefunction from the whole parameter space (the product space) to the complex numbers. Is this correct?
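For a product state, that combination is just the tensor product of the factor wavefunctions. A toy sketch (assuming NumPy; the amplitudes below are made up for illustration):

```python
import numpy as np

# A (discretized) position amplitude on 2 grid points, and a 2-component spin amplitude
psi_pos = np.array([0.6, 0.8], dtype=complex)
psi_spin = np.array([1, 1j], dtype=complex) / np.sqrt(2)

# Total wavefunction on the product space: one amplitude per (x, spin) pair
psi_total = np.kron(psi_pos, psi_spin)

# The Born rule on the product space is still |psi|^2, and normalization carries over
print(np.sum(np.abs(psi_total) ** 2))  # 1.0, since both factors are normalized
```

Note the caveat: general states on the product space are *not* of this product form (those are the entangled ones), which is part of why the Hilbert-space structure is more than a bookkeeping device.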