- #1

- Thread starter kof9595995

- #2

dx

Homework Helper

Gold Member


- #3

Fredrik

Staff Emeritus

Science Advisor

Gold Member


For example, Heisenberg's uncertainty principle was originally just an idea about the limits of what can be measured; in the finished theory it appears as a theorem derived from the formalism.

The same thing goes for the "superposition principle", "Einstein's postulates", "the equivalence principle", etc. They were all ideas that helped someone find a new theory, and they all appear in the finished theories as derivable results rather than separate assumptions.

- #4


Eugene.

- #5


Thanks, now I get it.

- #6

vanesch

Staff Emeritus

Science Advisor

Gold Member


The principle of superposition and the linearity of the Schroedinger equation are not fundamental principles of QM. Both of them can be derived from a more general postulate, which can be easily verified by experiment. This fundamental postulate says that in an ensemble of identically prepared systems measurements are not necessarily reproducible: measurements can have unavoidable statistical scatter. The entire formalism of quantum mechanics can be derived from this simple postulate. The derivation can be found in the theory called "quantum logic".

Could you sketch the derivation? I don't really see how you could derive something like a superposition principle from the idea that there is statistical scatter, and not simply end up with some kind of classical statistical mechanics with "hidden random variables".

- #7

Could you sketch the derivation? I don't really see how you could derive something like a superposition principle from the idea that there is statistical scatter, and not simply end up with some kind of classical statistical mechanics with "hidden random variables".

Quantum logic is derived along the same path as Boolean classical logic and the classical theory of probability. It shares exactly the same postulates up to the point where the postulate of distributivity is considered. On closer inspection it appears that the distributivity postulate of Boolean logic is not so obvious. The whole difference between quantum logic (and quantum probability theory) and classical logic (and classical probability theory) lies in this postulate.
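As a concrete illustration of how distributivity fails for subspaces (this numerical sketch is mine, not from the thread; the helper names are invented): take three distinct lines through the origin of the plane, with "and" as the intersection of subspaces and "or" as their combined span.

```python
import numpy as np

# Each subspace is represented by a matrix whose columns span it.
def dim(M):
    return int(np.linalg.matrix_rank(M))

def dim_join(A, B):
    # "A or B": the smallest subspace containing both = span of the combined columns
    return dim(np.hstack([A, B]))

def dim_meet(A, B):
    # "A and B": the intersection; dim(A and B) = dim A + dim B - dim(A or B)
    return dim(A) + dim(B) - dim_join(A, B)

# Three distinct lines through the origin:
a = np.array([[1.0], [0.0]])   # the x-axis
b = np.array([[1.0], [1.0]])   # the line y = x
c = np.array([[0.0], [1.0]])   # the y-axis

# Left side: a and (b or c).  Since (b or c) is the whole plane, this is just a.
lhs = dim_meet(a, np.hstack([b, c]))

# Right side: (a and b) or (a and c).  Both meets are the zero subspace {0},
# so their join is {0} as well.
ab = dim_meet(a, b)
ac = dim_meet(a, c)

print(lhs, ab, ac)   # 1 0 0 -- the two sides differ, so distributivity fails
```

In a Boolean lattice the two sides would be equal; for the lattice of subspaces of a Hilbert space they need not be, which is exactly the departure point described above.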

In the classical case the distributivity postulate is accepted, and then it is a simple matter to show that the entire theory has a representation in the classical phase space, where states are represented by probability distributions, logical propositions are represented by subsets of the phase space, and observables are represented by real functions on the phase space of classical mechanics.

It is also possible to reject the strict validity of the distributivity postulate and to accept a more general (orthomodular) postulate. Then we obtain another self-consistent theory of probability, which is called "quantum logic". It can be shown that this theory has a representation in a complex Hilbert space, where states are represented by unit vectors (or, more generally, density matrices), logical propositions are represented by subspaces (or projections onto them), and observables are represented by Hermitian operators. As a side effect, this conclusion also implies the superposition principle and the linearity of the Schroedinger equation.

So, from this point of view, quantum mechanics is nothing but a special sort of probability theory. In fact it is a generalization of the classical probability theory. Traditional classical logic, probability, and statistical mechanics appear as particular cases or approximations of this quantum formalism.

There are quite a few good books that describe this approach:

G. W. Mackey, "The Mathematical Foundations of Quantum Mechanics" (W. A. Benjamin, New York, 1963)

C. Piron, "Foundations of Quantum Physics" (W. A. Benjamin, Reading, 1976)

E. G. Beltrametti, G. Cassinelli, "The Logic of Quantum Mechanics" (Addison-Wesley, Reading, 1981)

V. S. Varadarajan, "Geometry of Quantum Theory"

Eugene.

- #8

- #9

In quantum mechanics states are represented by "rays" (or one-dimensional subspaces) of vectors in the Hilbert space. Two vectors differing only by a nonzero complex numerical factor correspond to the same physical state. So, state vector (or wave function) normalization is not a necessary step. Normalization is done simply out of mathematical convenience. There is no deep physical meaning in normalization.

Eugene.

- #10

In quantum mechanics states are represented by "rays" (or one-dimensional subspaces) of vectors in the Hilbert space. Two vectors differing only by a nonzero complex numerical factor correspond to the same physical state. So, state vector (or wave function) normalization is not a necessary step. Normalization is done simply out of mathematical convenience. There is no deep physical meaning in normalization.

Eugene.

But at least a wavefunction should be normalizable, right?

- #11

But at least a wavefunction should be normalizable, right?

Right.

- #12

But at least a wavefunction should be normalizable, right?

We must notice that a plane wave, as a wavefunction, cannot be strictly normalized. We still use it widely because it conveniently describes some real cases.
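The point can be seen numerically. In this sketch of mine (not from the thread), the squared norm of a plane wave over a box [-L, L] grows without bound as the box widens, while that of a Gaussian wave packet converges:

```python
import numpy as np

def norm_squared(psi, L, n=20001):
    """Trapezoidal-rule estimate of the squared norm of psi on the box [-L, L]."""
    x = np.linspace(-L, L, n)
    y = np.abs(psi(x)) ** 2
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

plane_wave = lambda x: np.exp(1j * 5.0 * x)   # |psi(x)|^2 = 1 everywhere
gaussian = lambda x: np.exp(-x**2 / 2.0)      # a normalizable wave packet

for L in (10.0, 100.0, 1000.0):
    print(L, norm_squared(plane_wave, L), norm_squared(gaussian, L))
# The plane-wave column grows like 2L without bound; the Gaussian column
# settles at sqrt(pi) ~ 1.7725.
```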

- #13

In quantum mechanics states are represented by "rays" (or one-dimensional subspaces) of vectors in the Hilbert space. Two vectors differing only by a nonzero complex numerical factor correspond to the same physical state. So, state vector (or wave function) normalization is not a necessary step. Normalization is done simply out of mathematical convenience. There is no deep physical meaning in normalization.

Why is there no deep meaning, if in the end you definitely have to normalize away any complex modulus to find the actual probabilities?

- #14

Why is there no deep meaning, if in the end you definitely have to normalize away any complex modulus to find the actual probabilities?

In QM the (pure) state [tex] \psi [/tex] of a system is represented by a "ray" of vectors (or 1-dimensional subspace) in the Hilbert space. It should not matter which particular vector in the ray you choose to represent the state. Vectors [tex]|\psi \rangle[/tex] and [tex]a|\psi \rangle[/tex] represent exactly the same state if [tex]a[/tex] is any nonzero complex constant.

In QM we are interested in calculating probabilities. Generally, these are probabilities to measure property [tex]X [/tex] in the state [tex]\psi[/tex]. Properties [tex]X [/tex] are represented by subspaces in the Hilbert space or projections [tex]P_X [/tex] on these subspaces. Then the most general formula for calculating probabilities is

[tex] \rho = Tr(P_{\psi}P_X) \qquad (1) [/tex]

where [tex] P_{\psi} [/tex] is the projection on the 1-dimensional subspace corresponding to the state [tex] \psi [/tex]. The good thing is that this formula does not depend on which particular vector (normalized or not) in the ray [tex] \psi [/tex] is chosen to represent the state. Another good thing is that [tex]\rho [/tex] is automatically confined between 0 and 1, as it should be for probability. Yet another good thing is that this formula is easily generalized for "mixed" states described by density matrices [tex]D[/tex]

[tex] \rho = Tr(DP_X) [/tex]

In actual calculations the above method is not convenient. It is easier to choose one normalized vector in the ray [tex] \psi [/tex] (for example, this vector can be represented by a normalized wave function in the position representation), and then calculate the norm of the projection of this vector onto the subspace [tex]X[/tex]. One can show that this approach leads to the same result (1), which is independent of which particular vector was chosen, as long as it is in the same ray.
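A quick numerical check of these claims (my own sketch; the setup and names are invented, not from the thread): scale the state vector by an arbitrary complex factor and confirm that formula (1) is unchanged and lies in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

# A random (unnormalized) state vector in C^4 ...
psi = rng.normal(size=4) + 1j * rng.normal(size=4)

# ... and a projection P_X onto a random 2-dimensional "property" subspace X
Q, _ = np.linalg.qr(rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2)))
P_X = Q @ Q.conj().T

def prob(vec, P):
    # Formula (1): rho = Tr(P_psi P_X), where P_psi projects onto the ray of vec;
    # dividing by <vec|vec> makes the result independent of the representative.
    P_psi = np.outer(vec, vec.conj()) / np.vdot(vec, vec)
    return float(np.trace(P_psi @ P).real)

p1 = prob(psi, P_X)
p2 = prob((3.0 - 4.0j) * psi, P_X)   # a different vector in the same ray

print(p1, p2)   # identical, and automatically between 0 and 1
```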

Eugene.

- #15

[...] Properties [tex]X [/tex] are represented by subspaces in the Hilbert space or projections [tex]P_X [/tex] on these subspaces. Then the most general formula for calculating probabilities is

[tex] \rho = Tr(P_{\psi}P_X) \qquad (1) [/tex]

where [tex] P_{\psi} [/tex] is the projection on the 1-dimensional subspace corresponding to the state [tex] \psi [/tex]. The good thing is that this formula does not depend on which particular vector (normalized or not) in the ray [tex] \psi [/tex] is chosen to represent the state. [...]

I don't know this formalism precisely enough to be sure, but I imagine a projection also changes its modulus when the initial vector is scaled? Only the phase factor cancels in density matrices, not the modulus?!

- #16

I don't know this formalism precisely enough to be sure, but I imagine a projection also changes its modulus when the initial vector is scaled? Only the phase factor cancels in density matrices, not the modulus?!

I am not quite sure what your point is. I suspect that you doubt my statement that formula (1) produces a number in the interval [0,1]. Is that so?

Frankly, I can't prove this statement off the top of my head. I am sure that the proof can be found in the classic paper on quantum probability

A. M. Gleason, "Measures on the closed subspaces of a Hilbert space", J. Math. Mech., 6 (1957), 885.

Eugene.

- #17

strangerep

Science Advisor


[...]

Only the phase factor cancels in density matrices - not the modulus?!

Indeed. Normalization of state operators (aka density matrices) is just a convention. Cf. Ballentine p. 46, Postulate 2a:

Postulate 2a: To each state there corresponds a unique state operator. The average value of a dynamical variable R, represented by the operator R, in the virtual ensemble of events that may result from a preparation procedure for the state, represented by the operator [itex]\rho[/itex], is

[tex]
\langle {\mathbf{R}} \rangle ~=~ \frac{Tr(\rho R)}{Tr(\rho)}
[/tex]

A bit later, on p. 48, he imposes the "convenient normalization"

[tex]
Tr(\rho) ~=~ 1
[/tex]

so that the denominator above can be omitted.
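The invariance that Ballentine's denominator buys can be checked numerically. This is my own sketch (the random-matrix setup and the names are mine, not from the book or the thread):

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary unnormalized state operator: positive and Hermitian
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T

# An arbitrary observable: Hermitian
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
R = (B + B.conj().T) / 2

def average(rho, R):
    # Ballentine's form: dividing by Tr(rho) makes the normalization irrelevant
    return float((np.trace(rho @ R) / np.trace(rho)).real)

a1 = average(rho, R)
a2 = average(7.3 * rho, R)   # the same state operator up to an overall scale

print(a1, a2)   # equal: the scale factor cancels
```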

HTH.
