Why is the superposition principle a first principle in QM

  • #1
I always thought the superposition principle was a consequence of the linearity of Schrödinger's equation, but it's not; instead it's a fundamental principle of QM, according to some references I read recently. However, I did not find any detailed explanation of this. Could someone kindly explain it to me? Thanks in advance
 
  • #2
It's more natural to think of the linearity of the Schrödinger equation as a consequence of the superposition principle.
 
  • #3
"Principles" aren't axioms. They are ideas that were stated before the theory was found, because someone felt that there were good reasons to think that the theory they were hoping to find had to be consistent with the idea. Principles are supposed to help us find new theories, but once the theory has been found, we should be using the theory instead of the principle.

For example, Heisenberg's uncertainty principle was originally just an idea about the practical impossibility of designing an experiment that lets us know a particle's position and momentum at the same time. It motivated Heisenberg to look for a mathematical representation of observables that includes a non-commutative multiplication operation. This idea was developed into QM, which contains a theorem that people also call "Heisenberg's uncertainty principle" because it is similar to the original HUP. (I actually don't think this name is appropriate, because this "HUP" is a theorem, not a principle, and because it's very different from the original HUP. It has nothing to do with practical difficulties at all.)

The same thing goes for the "superposition principle", "Einstein's postulates", "the equivalence principle", etc. They were all ideas that helped someone find a new theory, and they all appear in some form in the theory that was eventually found.
 
  • #4
The principle of superposition and the linearity of the Schrödinger equation are not fundamental principles of QM. Both of them can be derived from a more general postulate, which can be easily verified by experiment. This fundamental postulate states that in an ensemble of identically prepared systems, measurements are not necessarily reproducible: measurements can have unavoidable statistical scatter. The entire formalism of quantum mechanics can be derived from this simple postulate. This derivation can be found in the theory called "quantum logic".

Eugene.
 
  • #5
Thanks, now i get it
 
  • #6
The principle of superposition and the linearity of the Schrödinger equation are not fundamental principles of QM. Both of them can be derived from a more general postulate, which can be easily verified by experiment. This fundamental postulate states that in an ensemble of identically prepared systems, measurements are not necessarily reproducible: measurements can have unavoidable statistical scatter. The entire formalism of quantum mechanics can be derived from this simple postulate. This derivation can be found in the theory called "quantum logic".

Could you sketch the derivation? I don't really see how you could derive something like a superposition principle from the idea that there is statistical scatter, and not simply end up with some kind of classical statistical mechanics with "hidden random variables".
 
  • #7
Could you sketch the derivation? I don't really see how you could derive something like a superposition principle from the idea that there is statistical scatter, and not simply end up with some kind of classical statistical mechanics with "hidden random variables".


Quantum logic is derived along the same path as Boolean classical logic and the classical theory of probability. It shares exactly the same postulates up to the point where the postulate of distributivity is considered. On closer inspection, it appears that the distributivity postulate of Boolean logic is not so obvious. The whole difference between quantum logic (and quantum probability theory) and classical logic (and classical probability theory) lies in this postulate.

In the classical case the distributivity postulate is accepted, and then it is a simple matter to show that the entire theory has a representation in the classical phase space, where states are represented by probability distributions, logical propositions are represented by subsets of the phase space, and observables are represented by real functions on the phase space of classical mechanics.

It is also possible to reject the strict validity of the distributivity postulate and to accept a more general (orthomodular) postulate. Then we obtain another self-consistent theory of probability, which is called "quantum logic". It can be shown that this theory has a representation in a complex Hilbert space, where states are represented by unit vectors (or more generally, density matrices), logical propositions are represented by subspaces (or projections on them), and observables are represented by Hermitian operators. As a side effect, this conclusion also implies the superposition principle and the linearity of the Schrödinger equation.

So, from this point of view, quantum mechanics is nothing but a special sort of probability theory; in fact, it is a generalization of classical probability theory. Traditional classical logic, probability, and statistical mechanics appear as particular cases or approximations of this quantum formalism.
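To make the failure of distributivity concrete, here is a small numpy sketch (my own example, not from the thread). Propositions are subspaces of the Hilbert space; "and" is subspace intersection and "or" is the span of the union. In C^2, take a = span{|0⟩+|1⟩}, b = span{|0⟩}, c = span{|1⟩}: then a∧(b∨c) = a, but (a∧b)∨(a∧c) is the zero subspace, so the distributive law fails.

```python
import numpy as np

# Propositions in quantum logic are subspaces of the Hilbert space.
# In C^2, take three 1-dimensional subspaces (columns span the subspace):
a = np.array([[1.0], [1.0]]) / np.sqrt(2)   # span{|0> + |1>}
b = np.array([[1.0], [0.0]])                # span{|0>}
c = np.array([[0.0], [1.0]])                # span{|1>}

def dim_join(A, B):
    """Dimension of 'A or B': the span of the union of the subspaces."""
    return np.linalg.matrix_rank(np.hstack([A, B]))

def dim_meet(A, B):
    """Dimension of 'A and B': the intersection, via the Grassmann
    formula dim(V & W) = dim V + dim W - dim(V + W)."""
    return (np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B)
            - dim_join(A, B))

# Left side: a AND (b OR c).  Since b OR c is all of C^2, this is just a.
left = dim_meet(a, np.hstack([b, c]))

# Right side: (a AND b) OR (a AND c).  Both meets are the zero subspace,
# so their join is the zero subspace as well.
right = dim_meet(a, b) + dim_meet(a, c)

print(left, right)  # 1 0 -- distributivity fails
```

In a Boolean lattice the two sides would always have equal dimension; on the lattice of Hilbert-space subspaces they need not, which is exactly the point where quantum logic departs from classical logic.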

There are quite a few good books that describe this approach:

G. W. Mackey, "The mathematical foundations of quantum mechanics", (W. A. Benjamin, New York, 1963)

C. Piron, "Foundations of Quantum Physics", (W. A. Benjamin, Reading, 1976)

E. G. Beltrametti, G. Cassinelli, "The logic of quantum mechanics" (Reading, Mass. : Addison-Wesley, 1981).

V. S. Varadarajan, "Geometry of Quantum Theory".

Eugene.
 
  • #8
Actually, it seems to me that QM isn't strictly linear, because after adding solutions you still have to normalize them.
 
  • #9
Actually, it seems to me that QM isn't strictly linear, because after adding solutions you still have to normalize them.

In quantum mechanics states are represented by "rays" (or one-dimensional subspaces) of vectors in the Hilbert space. Two vectors differing only by any complex numerical factor correspond to the same physical state. So, state vector (or wave function) normalization is not a necessary step. Normalization is done simply out of mathematical convenience. There is no deep physical meaning in normalization.

Eugene.
 
  • #10
In quantum mechanics states are represented by "rays" (or one-dimensional subspaces) of vectors in the Hilbert space. Two vectors differing only by any complex numerical factor correspond to the same physical state. So, state vector (or wave function) normalization is not a necessary step. Normalization is done simply out of mathematical convenience. There is no deep physical meaning in normalization.

Eugene.

But at least a wavefunction should be normalizable, right?
 
  • #11
But at least a wavefunction should be normalizable, right?

Right.
 
  • #12
But at least a wavefunction should be normalizable, right?
Note that a plane wave, as a wavefunction, is not strictly normalizable. We nevertheless use it widely because it conveniently describes some real cases.
 
  • #13
In quantum mechanics states are represented by "rays" (or one-dimensional subspaces) of vectors in the Hilbert space. Two vectors differing only by any complex numerical factor correspond to the same physical state. So, state vector (or wave function) normalization is not a necessary step. Normalization is done simply out of mathematical convenience. There is no deep physical meaning in normalization.
Why is there no meaning if in the end to find actual probabilities you definitely have to normalize away any complex modulus?
 
  • #14
Why is there no meaning if in the end to find actual probabilities you definitely have to normalize away any complex modulus?

In QM the (pure) state [tex] \psi [/tex] of a system is represented by a "ray" of vectors (or 1-dimensional subspace) in the Hilbert space. It should not matter which particular vector in the ray you choose to represent the state. Vectors [tex]|\psi \rangle[/tex] and [tex]a|\psi \rangle[/tex] represent exactly the same state if [tex]a[/tex] is any complex constant.

In QM we are interested in calculating probabilities. Generally, these are probabilities to measure property [tex]X [/tex] in the state [tex]\psi[/tex]. Properties [tex]X [/tex] are represented by subspaces in the Hilbert space or projections [tex]P_X [/tex] on these subspaces. Then the most general formula for calculating probabilities is

[tex] \rho = Tr(P_{\psi}P_X) [/tex]......(1)

where [tex] P_{\psi} [/tex] is the projection on the 1-dimensional subspace corresponding to the state [tex] \psi [/tex]. The good thing is that this formula does not depend on which particular vector (normalized or not) in the ray [tex] \psi [/tex] is chosen to represent the state. Another good thing is that [tex]\rho [/tex] is automatically confined between 0 and 1, as it should be for probability. Yet another good thing is that this formula is easily generalized for "mixed" states described by density matrices [tex]D[/tex]

[tex] \rho = Tr(DP_X) [/tex]

In actual calculations the above method is not convenient. It is easier to choose one normalized vector in the ray [tex] \psi [/tex] (for example, this vector can be represented by a normalized wave function in the position representation), and then calculate the norm of the projection of this vector on the subspace [tex]X[/tex]. One can show that this approach leads to the same result (1), which is independent of which particular vector has been chosen, as long as it is in the same ray.
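A quick numerical check of formula (1) (my own sketch, with arbitrary randomly chosen vectors): the probability Tr(P_psi P_X) lands in [0,1] and is unchanged when the state vector is rescaled by any complex factor, since the ray projector P_psi depends only on the ray.

```python
import numpy as np

rng = np.random.default_rng(0)

# An (unnormalized) state vector psi and the projector onto its ray.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
P_psi = np.outer(psi, psi.conj()) / np.vdot(psi, psi).real

# A property X: the projector onto a 1-dimensional subspace spanned by phi.
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi = phi / np.linalg.norm(phi)
P_X = np.outer(phi, phi.conj())

# Formula (1): probability of measuring property X in the state psi.
prob = np.trace(P_psi @ P_X).real

# Rescaling psi by an arbitrary complex factor gives the same probability,
# because P_psi depends only on the ray, not on the representative vector.
psi2 = (3.7 - 2.1j) * psi
P_psi2 = np.outer(psi2, psi2.conj()) / np.vdot(psi2, psi2).real
prob2 = np.trace(P_psi2 @ P_X).real

print(prob, prob2)
```

The numerical factor cancels because both the outer product and the norm in the denominator of P_psi scale by the same |a|^2.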

Eugene.
 
  • #15
... Properties [tex]X [/tex] are represented by subspaces in the Hilbert space or projections [tex]P_X [/tex] on these subspaces. Then the most general formula for calculating probabilities is

[tex] \rho = Tr(P_{\psi}P_X) [/tex]......(1)

where [tex] P_{\psi} [/tex] is the projection on the 1-dimensional subspace corresponding to the state [tex] \psi [/tex]. The good thing is that this formula does not depend on which particular vector (normalized or not) in the ray [tex] \psi [/tex] is chosen to represent the state. ...
I don't know this formalism precisely to be sure, but I imagine a projection also changes modulus when the initial vector is scaled? Only the phase factor cancels in density matrices - not the modulus?!
 
  • #16
I don't know this formalism precisely to be sure, but I imagine a projection also changes modulus when the initial vector is scaled? Only the phase factor cancels in density matrices - not the modulus?!

I am not quite sure what your point is. I suspect that you doubt my statement that formula (1) produces a number in the interval [0,1]. Is that so?

Frankly, I can't prove this statement off the top of my head. I am sure that the proof can be found in the classic paper on quantum probability

A. M. Gleason, "Measures on the closed subspaces of a Hilbert space", J. Math. Mech., 6 (1957), 885.

Eugene.
 
  • #17
[...]
Only the phase factor cancels in density matrices - not the modulus?!

Indeed. Normalization of state operators (aka density matrices) is just a convention. Cf. Ballentine p. 46, Postulate 2a:

Postulate 2a: To each state there corresponds a unique state operator. The average value of a dynamical variable R, represented by the operator R, in the virtual ensemble of events that may result from a preparation procedure for the state, represented by the operator [itex]\rho[/itex], is
[tex]
\langle {\mathbf{R}} \rangle ~=~ \frac{Tr(\rho R)}{Tr(\rho)}
[/tex]

A bit later, over on p48, he imposes the "convenient normalization"
[tex]
Tr(\rho) ~=~ 1
[/tex]
so that the denominator above can be omitted.
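A tiny numerical illustration of this (my own example, not from Ballentine): take a deliberately unnormalized state operator, compute the average with the full formula, and check that the conventionally normalized operator gives the same answer without the denominator.

```python
import numpy as np

# A state operator (density matrix), deliberately NOT trace-normalized.
rho = np.array([[3.0, 1.0],
                [1.0, 1.0]])        # positive definite, Tr(rho) = 4

# A dynamical variable R; here, the Pauli-z operator.
R = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# Postulate 2a: <R> = Tr(rho R) / Tr(rho)
avg = np.trace(rho @ R) / np.trace(rho)

# With the convenient normalization Tr(rho) = 1, the denominator is 1
# and can be omitted.
rho_n = rho / np.trace(rho)
avg_n = np.trace(rho_n @ R)

print(avg, avg_n)  # both 0.5
```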

HTH.
 
