Deriving the statistical interpretation from Schrödinger's equation?

  • #1
pantheid
So, there are two things in quantum mechanics that I understand are axioms: the first is the Schrödinger equation, which cannot be derived. Okay, fine, we have to start somewhere. The second axiom is that the integral from a to b of the mod-squared wavefunction gives the probability of finding the particle between a and b. My question is: is there any framework that can derive the statistical interpretation just by manipulating the Schrödinger equation and building on other principles, or is this just treated as fundamental?
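(For concreteness, here is a minimal numerical sketch of that second axiom - Born's rule in position space. The ground state of an infinite square well is an assumed example state, and the interval [a, b] is arbitrary.)

```python
# Minimal sketch of the Born-rule postulate (not a derivation): for a
# normalized wavefunction psi, P(a <= x <= b) = integral of |psi|^2 over [a, b].
# Assumed example state: ground state of an infinite square well of width L.
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 10001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)   # normalized ground state
density = np.abs(psi) ** 2

a, b = 0.25 * L, 0.75 * L                        # arbitrary interval
mask = (x >= a) & (x <= b)
p_numeric = np.sum(density[mask]) * dx           # simple Riemann sum

# Closed form for this state: (b-a)/L - [sin(2*pi*b/L) - sin(2*pi*a/L)]/(2*pi)
p_exact = (b - a) / L - (np.sin(2 * np.pi * b / L) - np.sin(2 * np.pi * a / L)) / (2 * np.pi)
print(p_numeric, p_exact)                        # both ~0.818; over [0, L] it is 1
```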
 
  • #2
pantheid said:
My question is: is there any framework that can derive the statistical interpretation just by manipulating the Schrödinger equation and building on other principles, or is this just treated as fundamental?
There are claims that it's possible to derive the Born rule in the many-worlds interpretation. However, it is still open whether these claims are true.

We have had some discussions here in the forum, but I don't think that anybody can provide a sound and complete result.
 
  • #3
Schrödinger's equation is a theorem in a symmetry-based axiomatization of QM, following the ideas of Weyl and Wigner. The symmetry-based axiomatization starts off with Born's rule regarding the statistical nature of the mathematical objects describing the quantum states.
 
  • #4
In Bohmian mechanics the Born rule is derived (given the odd assumption). One interesting feature, though, is that Bohmian mechanics allows for the possibility of a system being in a state of quantum non-equilibrium, where the Born rule is not obeyed.
 
  • #5
jcsd said:
In Bohmian mechanics the Born rule is derived (given the odd assumption). One interesting feature, though, is that Bohmian mechanics allows for the possibility of a system being in a state of quantum non-equilibrium, where the Born rule is not obeyed.

But then that's not a derivation - that's just an assumption that it works.
 
  • #6
Two leading approaches for deriving the Born rule are:

1) Deutsch and Wallace's decision-theoretic approach
http://arxiv.org/abs/quant-ph/9906015
(Proc. R. Soc. Lond. A 8 August 1999 vol. 455 no. 1988 3129-3137)
http://arxiv.org/abs/0906.2718

2) Zurek's quantum Darwinism
http://arxiv.org/abs/0707.2832
http://arxiv.org/abs/0903.5082
(Nature Physics, vol. 5, pp. 181-188 (2009))

I believe both are best seen within the many-worlds interpretation, but it is not entirely clear which interpretation Zurek is using. As tom.stoer says, there is no consensus about their correctness.
 
  • #7
pantheid said:
But then that's not a derivation - that's just an assumption that it works.

It depends on whether you think the underlying assumption is reasonable or not.
 
  • #8
atyy said:
As tom.stoer says, there is no consensus about their correctness.
Even Wallace admits that it's still open whether his approach does succeed.
 
  • #9
jcsd said:
It depends on whether you think the underlying assumption is reasonable or not.

Exactly.

Personally I like Gleason's Theorem:
http://kof.physto.se/theses/helena-master.pdf

But that has an assumption - basis independence (physically this means non-contextuality).

Dextercioby also hit the nail on the head - while one can axiomatize QM in various ways, most who have been exposed to it would say the approach with the greatest elegance is to use Born's rule and symmetry to derive Schrödinger's equation.

You will find this approach in Ballentine - Quantum Mechanics - A Modern Development:
https://www.amazon.com/dp/9810241054/?tag=pfamazon01-20

He develops it from two axioms. The first axiom is the observables postulate (i.e. the eigenvalues of an observable are the possible outcomes of an observation) and the second is Born's rule.

Interestingly, via Gleason's Theorem, you can derive Born's rule from the first axiom, so QM is really just one axiom. Obviously that's a crock of the proverbial - more than one axiom is required. It's just that in this approach the rest are derivable from quite reasonable further assumptions, such as that the probabilities from the Born rule do not depend on the inertial frame.

Still, it's very interesting to see just what the real key non-intuitive assumption of QM is, and that the rest really follows from it in a reasonable way.
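(A small numerical sketch - mine, not Ballentine's - of the object Gleason's Theorem characterizes may help: for a density matrix ρ, the candidate measure p(P) = Tr(ρP) turns every orthonormal basis into a probability distribution, and its value on a projector does not depend on which basis the projector is embedded in - the basis independence referred to above.)

```python
# Sketch of the measure Gleason's Theorem singles out: p(P) = Tr(rho P).
# For every orthonormal basis the values form a probability distribution.
import numpy as np

rng = np.random.default_rng(0)
d = 4

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T                 # positive matrix...
rho /= np.trace(rho).real            # ...normalized to unit trace

def p(P):
    return np.trace(rho @ P).real    # candidate probability of projector P

for _ in range(3):                   # three random orthonormal bases
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(M)           # columns form an orthonormal basis
    probs = [p(np.outer(Q[:, i], Q[:, i].conj())) for i in range(d)]
    print(np.round(probs, 3), round(sum(probs), 10))   # nonnegative, sum to 1
```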

Thanks
Bill
 
  • #11
bhobba said:
via Gleason's Theorem, you can derive Born's rule from the first axiom, ...
I don't agree.

Gleason's theorem says that if a probability measure is to be introduced, then it must comply with Born's rule. But Gleason's theorem does not say that you have to introduce a probability measure at all.
 
  • #12
tom.stoer said:
I don't agree.

Gleason's theorem says that if a probability measure is to be introduced, then it must comply with Born's rule. But Gleason's theorem does not say that you have to introduce a probability measure at all.
This is indeed an important assumption of Gleason's theorem, but there is an even more important one: the additivity of the expectation values for commuting observables.

As shown by Bell, hidden-variable theories (such as the Bohmian one) may violate this assumption in general, and yet be compatible with all measurable predictions of QM in situations where measurements are performed.
 
  • #13
Demystifier said:
the additivity of the expectation values for commuting observables.

I suspect you are thinking about the error in von Neumann's hidden-variable proof, which made that assumption, and not Gleason's Theorem, which has much weaker assumptions. The only assumption is that the measure must be basis independent.

I have posted the proof - you can check it for yourself - but it's well known. Basis independence is innocuous mathematically, and more or less required by the fact that you are dealing with a vector space; physically, though, it implies non-contextuality, which is far from trivial.

Thanks
Bill
 
  • #14
tom.stoer said:
But Gleason's theorem does not say that you have to introduce a probability measure at all.

Are you seriously doubting that the expectation of an observation will reach a stable value?

Kolmogorov's axioms follow from Cox's axioms. Do you seriously doubt that Cox's axioms can be applied?

Of course they are assumptions, but I suspect most would put them in the innocuous category.

Thanks
Bill
 
  • #15
bhobba said:
Are you seriously doubting that the expectation of an observation will reach a stable value?

Kolmogorov's axioms follow from Cox's axioms. Do you seriously doubt that Cox's axioms can be applied?

Of course they are assumptions, but I suspect most would put them in the innocuous category.

Thanks
Bill

Even in the context of many-worlds? (I agree there's a good argument they can, but is it really clear that the argument is completely correct and without flaw?)
 
  • #16
bhobba said:
Are you seriously doubting that the expectation of an observation will reach a stable value?

Kolmogorov's axioms follow from Cox's axioms. Do you seriously doubt that Cox's axioms can be applied?

Of course they are assumptions, but I suspect most would put them in the innocuous category.

Thanks
Bill
Bill,

Gleason's theorem says that there is one unique probability measure on Hilbert spaces. But Gleason's theorem does not say that you must introduce a probability measure at all. You can use Hilbert spaces for many other purposes, not only QM, and in those cases you don't introduce a probability measure. The fact that you want to introduce a probability measure is a matter of interpretation, or of the applicability of the Hilbert space formalism to nature.

So first you have to use two axioms, like
1) QM uses Hilbert spaces
2) QM makes probabilistic predictions
Then you can use Gleason's theorem, which tells you which measure to use.

Suppose you formulate classical electrodynamics using Hilbert spaces. Does Gleason's theorem force you to introduce a probability measure for electrodynamics? I would say "no".
 
  • #17
tom.stoer said:
Gleason's theorem says that there is one unique probability measure on Hilbert spaces. But Gleason's theorem does not say that you must introduce a probability measure at all.

The operator rule says the outcome of an observation is an eigenvalue of the operator. It implies you get an outcome with each observation.

Are you seriously doubting that, when given outcome values, you can apply probability to analyse those outcomes? If so, many areas of applied mathematics go down the gurgler: actuarial science, weather forecasting, econometrics - the list goes on and on. It's such a trivial assumption that no text in those areas, to the best of my knowledge anyway, even states it as an assumption. It is one, of course, but it's so utterly obvious that no one elevates it to that status. But for some reason in QM there are those who, when confronted with observational data, say it's a real issue.

Sorry, my background in applied math tells me it's so trivial it does not rate a mention as a key assumption.
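(To make the point concrete, a toy simulation: the two eigenvalues and the 60/40 weights below are arbitrary assumptions standing in for whatever nature actually delivers. The relative frequencies of eigenvalue outcomes stabilize, which is all that assigning probabilities requires.)

```python
# Toy illustration: a sequence of eigenvalue outcomes invites ordinary
# statistical analysis. Eigenvalues and weights are assumed, not derived.
import numpy as np

rng = np.random.default_rng(1)
eigenvalues = np.array([-1.0, 1.0])
weights = np.array([0.6, 0.4])

outcomes = rng.choice(eigenvalues, size=100000, p=weights)
for n in (100, 1000, 100000):
    freq = np.mean(outcomes[:n] == 1.0)
    print(n, freq)                   # relative frequency of +1 stabilizes near 0.4
```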

Thanks
Bill
 
  • #18
atyy said:
Even in the context of many-worlds? (I agree there's a good argument they can, but is it really clear that the argument is completely correct and without flaw?)

In many worlds one interprets it as a confidence level that you are in a particular world, obeying Cox's axioms, and derives probabilities that way.

What I am talking about here is from the formalism - not a particular interpretation. It's simply that, from the first axiom - that the outcome of an observation is an eigenvalue of the observable - you get data from carrying out the same observation under the same conditions a number of times: trials, or whatever terminology you want to use.

It's utterly trivial that one can do statistical analysis of such data, assuming values have a certain probability of occurring.

My background is in applied math, where I studied such things as mathematical statistics and stochastic models. That one can do such things is considered so trivial that it is assumed, without even being mentioned, that values can be assigned a probability and that those probabilities must add up to one.

I believe your background is biology. Do you seriously doubt that we can do things like assign a probability to the number of offspring a member of a population will have? Of course it's an assumption, but it's so utterly obvious that it's doubtful anyone would even think of questioning it. And it's so widely used in areas such as weather forecasting and actuarial science that you would be hard pressed to convince anyone it's even debatable.

Thanks
Bill
 
  • #19
tom.stoer said:
Suppose you formulate classical electrodynamics using Hilbert spaces. Does Gleason's theorem force you to introduce a probability measure for electrodynamics? I would say "no".

Does EM have the axiom of QM that observables are Hermitian operators and that the possible outcomes of observations are the operators' eigenvalues? It's an axiom from which you get data - and data implies you can assign probabilities.

Imagine someone has given you a sequence of numbers, said they can only be certain values, and asked you to analyse them. Don't you think you would get a strange look if you asked: can I assume a probability can be assigned to a particular number's occurrence?

Thanks
Bill
 
  • #20
Bill, you don't get the point.

In http://arxiv.org/abs/quant-ph/0405161 I found the following statement:

Zurek said:
Indeed, Gleason’s theorem is now an accepted and rightly famous part of quantum foundations. It is rigorous – it is after all a theorem about measures on Hilbert spaces. However, regarded as a result in physics it is deeply unsatisfying: it provides no insight into physical significance of quantum probabilities – it is not clear why the observer should assign probabilities in accord with the measure indicated by Gleason’s approach

That's exactly my point.

Gleason's theorem says that if you want to construct a probability measure on a Hilbert space, then the probability measure is uniquely determined. But Gleason's theorem does not tell you why you should introduce a probability measure at all. There are applications for separable Hilbert spaces in other branches of physics, and in these other branches you do not introduce probabilities. This shows that Gleason's theorem alone is not sufficient to explain why you should do that.

bhobba said:
It's such a trivial assumption that no text in those areas, to the best of my knowledge anyway, even states it as an assumption. It is one, of course, but it's so utterly obvious that no one elevates it to that status. But for some reason in QM there are those who, when confronted with observational data, say it's a real issue.
Anyway, the assumption may be trivial, but it is an assumption. No mathematical theorem about a mathematical structure forces you to interpret this mathematical structure in a certain way, or to interpret it at all.
 
  • #21
bhobba said:
In many worlds one interprets it as a confidence level that you are in a particular world, obeying Cox's axioms, and derives probabilities that way.

What I am talking about here is from the formalism - not a particular interpretation. It's simply that, from the first axiom - that the outcome of an observation is an eigenvalue of the observable - you get data from carrying out the same observation under the same conditions a number of times: trials, or whatever terminology you want to use.

It's utterly trivial that one can do statistical analysis of such data, assuming values have a certain probability of occurring.

My background is in applied math, where I studied such things as mathematical statistics and stochastic models. That one can do such things is considered so trivial that it is assumed, without even being mentioned, that values can be assigned a probability and that those probabilities must add up to one.

I believe your background is biology. Do you seriously doubt that we can do things like assign a probability to the number of offspring a member of a population will have? Of course it's an assumption, but it's so utterly obvious that it's doubtful anyone would even think of questioning it. And it's so widely used in areas such as weather forecasting and actuarial science that you would be hard pressed to convince anyone it's even debatable.

Thanks
Bill

So your point is that the opposite of determinism is not probability (even though that is standard colloquial usage), since technically, determinism is a subset of probability (assuming measurability, which I do think is natural). Instead, in the context of quantum mechanics, the opposite of determinism is contextuality.

If in many-worlds we take probability to be "a confidence level that you are in a particular world, obeying Cox's axioms", but don't assume non-contextuality, I presume we could get certainty (no Born rule)?
 
  • #22
atyy said:
So your point is that the opposite of determinism is not probability (even though that is standard colloquial usage), since technically, determinism is a subset of probability (assuming measurability, which I do think is natural). Instead, in the context of quantum mechanics, the opposite of determinism is contextuality.

Scratching my head what you even mean.

Non-contextuality simply means the probability of an outcome of an observation does not depend on the other parts of the observation. Think of an observation with outcomes |bi> and another with outcomes |gi>, but with |b1> = |g1>. We can assign the value 1 to outcome |b1> and zero to all the rest. The two observables created this way are equal. If the formalism of QM is correct, we would expect the probability of getting |b1> to be the same in both cases, i.e. not dependent on which observation is involved, because the observables are exactly the same. Since the outcomes |bi> and |gi> are two different bases with exactly the same first element, this is exactly the same as saying the probability is basis independent.
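(A sketch of that example in code, assuming dimension 3 and a randomly chosen state: within the formalism, the Born probability of the shared outcome |b1> = |g1> is the same number whichever observation it is embedded in, because it depends only on the vector itself.)

```python
# Sketch of the |b1> = |g1> example (assumed: dimension 3, random state).
# Two "observations" share their first outcome vector; the rest of each
# basis differs. The Born probability of that outcome is identical in both.
import numpy as np

rng = np.random.default_rng(2)
d = 3

v = rng.normal(size=d) + 1j * rng.normal(size=d)
psi = v / np.linalg.norm(v)

B, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

U, _ = np.linalg.qr(rng.normal(size=(d - 1, d - 1)) + 1j * rng.normal(size=(d - 1, d - 1)))
G = B.copy()
G[:, 1:] = B[:, 1:] @ U              # rotate the basis vectors orthogonal to |b1>

p_b1 = abs(B[:, 0].conj() @ psi) ** 2
p_g1 = abs(G[:, 0].conj() @ psi) ** 2
print(p_b1, p_g1)                    # equal: only |b1> itself enters
```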

Added Later:
I made a goof in replying to Demystifier. He is correct - Gleason makes the assumption that expectations are additive for commuting observables. It's exactly the same as basis independence: commuting observables (basically) have the same eigenvectors. Simply by changing the values of the outcomes, either can be made the same as the other observable, so you are really dealing with the same observable; and even though they are physically different, if the formalism of QM is correct you would expect them to give exactly the same expectation values.

What Gleason's Theorem shows is that this is enough to derive Born's rule. Basically it simply requires that we take the interpretation of observables at face value: if two observables are exactly the same, even though the physical apparatuses are different, then the probability of getting a particular outcome is the same.

This is why contextuality in QM is often seen as a bit strange - it's really at odds with the formalism.

The out that hidden-variable theories have is that the hidden variables can be contextual, hence invalidating the theorem.

It's a subtler form of the error von Neumann made in his proof that hidden variables can't exist - except he assumed that expectation values are always additive, not just for commuting observables. That's true in QM - but hidden variables are another matter.

Thanks
Bill
 
  • #23
tom.stoer said:
Bill, you don't get the point.

I have been through this one many times before, and I think I do get the point.

It's simply this: it's an assumption that we can assign probabilities to outcomes - no question - but it's of a very trivial sort that I doubt anyone would seriously question, especially anyone with a background in applied math.

What really seems to lie at the heart of it is not that one can assign probabilities; it's that Born's rule doesn't allow the assigning of only 0 and 1 as probabilities, which means determinism is kaput.

What Gleason shows is that determinism combined with non-contextuality within the formalism of QM (i.e. directly from the definition of observables) is not allowed. It doesn't give any intuitive picture of why this is - it's just a mathematical 'quirk'. Personally I am very comfortable with taking the mathematics at face value.

tom.stoer said:
But Gleason's theorem does not tell you why you should introduce a probability measure at all.

It's the same 'why' that would lead you to assign probabilities to the outcomes in a sequence of data you were handed to analyse. You would naturally assign probabilities and work out things like the probability of getting a particular value.

It's the same 'why' that, when asked to analyse queue lengths at a bank teller, would have you assign a probability to a person arriving in a short time interval.

It's the same 'why' that, if you were an actuary, would have you assign probabilities to people living to a certain age.

It's the same 'why' that, if you were a weather forecaster, would have you try to figure out the probability of rain occurring tomorrow.

It's simply a natural and reasonable thing to do. Sure, it's an assumption that you can do those things, but it's an assumption that's made all the time in trying to make sense of the world, and it's so prevalent I doubt anyone would seriously question it.

tom.stoer said:
Anyway, the assumption may be trivial, but it is an assumption. No mathematical theorem about a mathematical structure forces you to interpret this mathematical structure in a certain way, or to interpret it at all.

Sure. I am not questioning that it's an assumption. What I am questioning is why make a big deal about it.

I often say Ballentine is a very interesting treatment of QM because it's based on just two axioms - others have a lot more. It's not that those other axioms are not required - it's that they have been replaced with other assumptions that seem natural, almost trivial, to the point that they are not specifically stated as axioms. That you can assign probabilities to such things is a very common assumption used in many areas of applied math, so much so that no one even states it's an assumption - it's simply assumed.

This sort of thing occurs in other areas of physics. For example, one can actually derive Maxwell's equations from relativity and Coulomb's law. It's a really nice proof - I like it. But the EM guru, Jackson, in his book (so I have been told anyway) broadsides it, calling such proofs silly because they have hidden assumptions. Personally I am not so pessimistic - yes, they have such assumptions, and I managed to locate the one in the derivation of Maxwell's equations - but to me a presentation where the assumptions are natural and almost trivial is superior to one that is opaque. Just my view.

Thanks
Bill
 
  • #24
bhobba said:
I suspect you are thinking about the error in von Neumann's hidden-variable proof, which made that assumption, and not Gleason's Theorem, which has much weaker assumptions. The only assumption is that the measure must be basis independent.

:eek::eek::eek::eek::eek::eek::eek::eek:

My goof. I didn't notice this was for COMMUTING observables.

It's exactly the same as basis independence.

Thanks
Bill
 
  • #25
@bhobba, regarding posts #21 & #22, would it be better if I had said "in the context of quantum mechanics, the opposite of determinism is non-contextuality"?

@tom.stoer, I believe bhobba's point is that assigning a measure does not imply a loss of determinism. After all, one can assign a delta measure on phase space in classical mechanics and evolve it with the Liouville equation, completely deterministically. So I think what he is saying is that the non-trivial assumption in Gleason's theorem is not that a measure can be assigned, but something else, like non-contextuality.
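(A minimal sketch of that delta-measure point, assuming a toy harmonic oscillator: the measure rides on a single phase-space point, so every region gets probability exactly 0 or 1 while the dynamics stays fully deterministic.)

```python
# Sketch of the delta-measure remark (assumed system: unit-frequency harmonic
# oscillator, H = (p^2 + q^2)/2). A delta measure on phase space is carried by
# one point; under Hamiltonian flow any region has probability 0 or 1 - a
# probability measure with no loss of determinism.
import numpy as np

q, p = 1.0, 0.0                      # support point of the delta measure
dt, steps = 0.01, 1000
for _ in range(steps):               # symplectic Euler integration
    p -= dt * q
    q += dt * p

in_region = (q > 0.0)                # any test region gets probability 0 or 1
print(q, p, float(in_region))
```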
 
  • #26
atyy, Bill, I fully understand what you are saying.

My only point is this little "why", especially in the many-worlds interpretation. The "bare formalism" of QM neither forces you to introduce a probabilistic interpretation, nor to introduce any interpretation at all. Interpreting the formalism and explaining its relation to the real world is beyond math; it's exactly what distinguishes math from physics.

So it's your choice! You decide to interpret the bare QM formalism in a certain way. You decide to interpret it probabilistically. And you claim that in some sense it's natural to do that. Gleason's theorem supports this view in the sense that the entity you want to interpret that way is unique. But the theorem itself is not sufficient.

(and of course observations force you to do something like that b/c you observe probabilities)

So coming back to the original question: no, it's not possible to derive a probabilistic interpretation from the Schrödinger equation, simply b/c you can't derive any interpretation from any bare formalism at all. That never follows in the sense of a mathematical proof.

The good news is that if you decide to interpret the QM formalism probabilistically (based on your observations), then the probability measure is uniquely determined (not by the Schrödinger equation but by the underlying structure of the Hilbert space). And the Schrödinger equation itself is nearly uniquely determined once you introduce a probabilistic interpretation, simply b/c given a probability measure there must be a unitary time evolution respecting tr(ρ) = 1.
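(That last step can be illustrated with a minimal sketch, using an assumed random toy Hamiltonian: a Hermitian generator yields norm-preserving, i.e. probability-conserving, evolution of the Schrödinger form, while a generic non-Hermitian generator does not.)

```python
# Sketch: demanding conserved total probability (<psi|psi> = 1, tr rho = 1)
# forces unitary evolution; a Hermitian generator H gives the Schrödinger
# form psi' = -i H psi. H below is an assumed random toy Hamiltonian.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
d = 4

M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (M + M.conj().T) / 2             # Hermitian generator
U = expm(-1j * 0.1 * H)              # unitary evolution over a small time step

v = rng.normal(size=d) + 1j * rng.normal(size=d)
psi = v / np.linalg.norm(v)

print(np.linalg.norm(U @ psi))       # 1.0: probability conserved
print(np.linalg.norm(expm(0.1 * M) @ psi))   # generic non-unitary map: norm drifts
```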

So all this is pretty nice and (nearly) unique.

Anyway, some people see a big conceptual issue! Why else do you think that so many people (Everett, Deutsch, Zurek, Wallace, ...) over the decades have wanted to motivate a probabilistic interpretation, even though they are familiar with Gleason's result? It seems that they all agree on one basic fact: the bare formalism comes w/o any probability, and the many-worlds interpretation is rather foggy about the question of what a "world" really is. So the basic question is whether the result of Gleason's theorem - in the context of many worlds with its deterministic time evolution - can be interpreted in a natural way, and whether there is a reasonable interpretation of some entity in the formalism to which you want to assign a probability (the "world").

So my question here is Zurek's question. It's the "why".
 
  • #27
atyy said:
@bhobba, regarding posts #21 & #22, would it be better if I had said "in the context of quantum mechanics, the opposite of determinism is non-contextuality"?

Sort of.

It's like this. Non-contextuality, plus the very reasonable assumption that one can assign probabilities to the outcomes of observations (an assumption so prevalent in many areas of applied math that no one even states it, let alone doubts it), implies the Born rule. The Born rule, as is well known, does not allow the assignment of only 0s and 1s as probabilities to all observables, indicating that a deterministic interpretation is not possible. However, it's not necessary for contextual theories to be deterministic. A simple illustration of that middle step is sketched below.
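(A minimal sketch of why 0/1-only assignments fail, assuming a qubit and a family of spin measurements along rotated axes: the Born probability varies continuously with the measurement direction, so it cannot equal 0 or 1 for every observable.)

```python
# Why only-0-or-1 probability assignments fail (assumed: a qubit measured
# along axes tilted by angle t from z): the Born probability cos^2(t/2)
# varies continuously from 1 to 0, so it cannot be 0 or 1 for every t.
import numpy as np

psi = np.array([1.0, 0.0])           # spin-up along z
for t in np.linspace(0.0, np.pi, 7):
    up_t = np.array([np.cos(t / 2), np.sin(t / 2)])   # "up" eigenvector along tilted axis
    print(round(t, 2), round(abs(up_t @ psi) ** 2, 3))
```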

atyy said:
@tom.stoer, I believe bhobba's point is that assigning a measure does not imply a loss of determinism. After all, one can assign a delta measure on phase space in classical mechanics and evolve it with the Liouville equation, completely deterministically. So I think what he is saying is that the non-trivial assumption in Gleason's theorem is not that a measure can be assigned, but something else, like non-contextuality.

It goes beyond that. In applied math one assigns probabilities to outcomes all the time without stating that being able to do so is an assumption. It's like when you calculate the acceleration of a particle: you are implicitly assuming its position is at least twice differentiable. But it's so obvious that no one states it explicitly, nor is it seriously doubted that you can do this in principle. Yet for some reason in QM such an assumption is seen as a big deal. With my applied math background it's simply business as usual - it's done all the time.

Thanks
Bill
 
  • #28
bhobba said:
I suspect you are thinking about the error in von Neumann's hidden-variable proof, which made that assumption, and not Gleason's Theorem, which has much weaker assumptions.
No, I meant what I said. See
J.S. Bell, On the problem of hidden variables in quantum mechanics
(reprinted in "Speakable and Unspeakable in Quantum Mechanics").
Sec. 3 is about von Neumann, but Sec. 5 is about Gleason.

Or if you want someone more orthodox, see
Asher Peres, Quantum Theory: Concepts and Methods
In Sec. 7-2, "Gleason's theorem", he outlines three assumptions of the theorem and says that "the last assumption ... is not at all trivial".
 
  • #29
Demystifier said:
No, I meant what I said.

Yep - my mistake.

Thanks
Bill
 
  • #30
Demystifier said:
In Sec. 7-2, "Gleason's theorem", he outlines three assumptions of the theorem and says that "the last assumption ... is not at all trivial".

I checked it out, and indeed it is far from trivial.

In what I said above I have been concentrating on a related assumption that I am surprised he didn't mention - namely, that the expectation doesn't depend on the other eigenvectors of the observable. It's the step from the equality in equation 7.7 to equation 7.8 on page 191, i.e. if Px, Py are orthogonal, Pu, Pv are orthogonal, and Px + Py = Pu + Pv, then their summed expectations are equal.

I have a sneaking suspicion, however, that it's really the same assumption expressed differently.
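(For what it's worth, here is a quick numerical check of that condition - my own sketch, with d = 3 and a random state. For the trace measure the implication holds automatically by linearity; the non-trivial physical assumption is that any acceptable expectation assignment must respect it.)

```python
# Two orthogonal pairs with Px + Py = Pu + Pv (both spanning the same plane)
# give equal summed expectations under the trace measure. Assumed: d = 3,
# random density matrix; the pairs differ by a unitary rotation in the plane.
import numpy as np

rng = np.random.default_rng(4)
d = 3

Q, _ = np.linalg.qr(rng.normal(size=(d, 2)) + 1j * rng.normal(size=(d, 2)))
W, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
R = Q @ W                            # second orthonormal pair in the same plane

def proj(w):
    return np.outer(w, w.conj())

Px, Py = proj(Q[:, 0]), proj(Q[:, 1])
Pu, Pv = proj(R[:, 0]), proj(R[:, 1])
print(np.allclose(Px + Py, Pu + Pv))          # True: both project onto the plane

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real
print(np.trace(rho @ (Px + Py)).real,
      np.trace(rho @ (Pu + Pv)).real)         # equal expectations
```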

Thanks
Bill
 

