On the use of Hilbert Spaces to represent states

the_pulp
There are a lot of equivalent definitions of Hilbert spaces. Let's take the Wikipedia one, and let me ask you some questions:
A Hilbert space:

1) Should be linear
2) Should have an inner product (the bra-ket rule to get an amplitude)
3) Should be complete (every Cauchy sequence should be convergent)

Why should states have these properties?

Thanks!

PS: I would like to receive answers that do not depend on the results of QM. For example, we could make an argument that starts by accepting the Heisenberg principle and from it, perhaps, derive that the state should be represented by an element of a Hilbert space. But I want to go the other way around. I would like to prove or explain, from really basic hypotheses, why 1), 2) and 3) should happen, and then, for example, prove the Heisenberg principle (adding first all the other stuff, such as the Born rule, the evolution equation and so on).
 
There is an item in the general physics forum which seems to be asking the same question.
 
I saw that one, but it doesn't. That one tries to represent classical mechanics in the "states/operators" language.
Here, what I want to know is why the elements of a Hilbert space are suitable to represent states, and why we are not using, for example, some topological space (which is more general and doesn't have a notion of distance) or some other set of mathematical objects. Why do we want the mathematical objects that represent states to be linear and complete, and to have an inner product?
 
the_pulp said:
[...] what I want to know is why the elements of a Hilbert space are suitable to represent states, and why we are not using, for example, some topological space (which is more general and doesn't have a notion of distance) or some other set of mathematical objects. Why do we want the mathematical objects that represent states to be linear and complete, and to have an inner product?
Try Ballentine chapters 1-3.

(Sorry, I don't have time for a detailed reply right now, but I think Ballentine will take you at least part-way towards an answer...)
 
The basic and fundamental postulate of QM - the Superposition Principle - implies it must be a vector space. It is a mathematical convenience to require that the bras and kets can be put into one-to-one correspondence, which implies it is a Hilbert space (Riesz Representation Theorem) - and some important and powerful theorems, such as Gleason's Theorem, hold for Hilbert spaces.

But you do not have to do that, and you can extend it to something more general called a Rigged Hilbert Space, which has mathematical advantages in allowing the existence of things like Dirac delta functions that are ruled out of a Hilbert space.

Thanks
Bill
 
You can start from the observables: motivated by the experimental results, you can deduce the mathematical structure of the set of observables. Then some mathematical theorems will force the use of Hilbert spaces on you.
 
Some mathematical theorems? Which ones? What is the line of reasoning by which, through those theorems, I would be able to derive that the only choice is to work with elements of Hilbert spaces?

Thanks for all!

PS: I will check Ballentine!
 
Well, if you want rigor, then it can all in fact be derived from 5 reasonable axioms that on the surface have nothing to do with Hilbert spaces - check out:
http://arxiv.org/pdf/quant-ph/0101012v4

The key axiom is the requirement of continuous transformations between pure states, and while the proof that it leads to Hilbert spaces is long (it took me about a week to go through it, checking each step), the actual math is basically linear algebra.

Thanks
Bill
 
bhobba said:
Well, if you want rigor, then it can all in fact be derived from 5 reasonable axioms that on the surface have nothing to do with Hilbert spaces - check out:
http://arxiv.org/pdf/quant-ph/0101012v4

I've already read that one. I did not like those axioms, especially the 2nd one (K = K(N)), but I didn't give them as much time as you did. Perhaps if I gave them another try I would find them different.

I am reading Ballentine right now. It's a pity I hadn't read it before.

I have another question related to this topic. Why should the Hilbert space that we use in QM be complex? What would happen if it were a real Hilbert space? Would we arrive at classical physics?
 
  • #10
Search for "quantum logic" and "Piron's theorem".

Eugene.
 
  • #11
the_pulp said:
I've already read that one. I did not like those axioms, especially the 2nd one (K = K(N)), but I didn't give them as much time as you did. Perhaps if I gave them another try I would find them different.

It's not really a question of whether you like the axioms or not - I personally prefer others - it's a question of whether they imply a Hilbert space - and they do.

the_pulp said:
I am reading Ballentine right now. It's a pity I hadn't read it before.

Its by far the best book on QM I know - my go to book

the_pulp said:
I have another question related to this topic. Why should the Hilbert space that we use in QM be complex? What would happen if it were a real Hilbert space? Would we arrive at classical physics?

Well, at a physical level it's required for interference effects - you can't model those in a real Hilbert space. At a mathematical level you require it because you want to be able to continuously go from one state to another via other states, which would seem to be a key requirement for how physical systems should evolve - you must go to complex numbers to allow that.

For a bit more detail check out:
http://www.scottaaronson.com/democritus/lec9.html
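Here is a minimal numpy/scipy sketch of one facet of that claim (my own toy illustration, not from the link): a real reflection cannot be reached continuously from the identity inside the real orthogonal transformations, but viewed as a complex unitary it lies on a continuous path exp(iHt) of unitaries starting at the identity.

```python
import numpy as np
from scipy.linalg import expm

# A reflection: real orthogonal with det = -1.  Within the REAL
# orthogonal transformations no continuous path connects it to the
# identity, since det (continuous, always +1 or -1) would have to jump.
R = np.diag([1.0, -1.0])
print(np.linalg.det(R))                    # -1.0

# Over the complex numbers the obstruction disappears: R = exp(iH)
# for the Hermitian H below, so U(t) = exp(iHt) is a continuous path
# of unitaries from the identity (t = 0) to R (t = 1).
H = np.diag([0.0, np.pi])
for t in np.linspace(0.0, 1.0, 5):
    U = expm(1j * t * H)
    assert np.allclose(U @ U.conj().T, np.eye(2))   # unitary at every t
print(np.round(expm(1j * H).real, 6))      # recovers R
```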

Thanks
Bill
 
  • #12
bhobba said:
It's not really a question of whether you like the axioms or not - I personally prefer others - it's a question of whether they imply a Hilbert space - and they do.

Yes, it is a question of whether I like it. In fact, I started this thread because I didn't like the Hilbert space axiom.

bhobba said:
Well, at a physical level it's required for interference effects - you can't model those in a real Hilbert space. At a mathematical level you require it because you want to be able to continuously go from one state to another via other states, which would seem to be a key requirement for how physical systems should evolve - you must go to complex numbers to allow that.

For a bit more detail check out:
http://www.scottaaronson.com/democritus/lec9.html

Yes! That was the answer I was looking for.

Try Ballentine chapters 1-3.
It's by far the best book on QM I know - my go-to book.

Yes, it is the introduction I needed when I started reading about these things.

What I didn't like about Ballentine is that it assumes the Born rule. I saw a paper that derives it from certain basic assumptions (such as representing states with Hilbert spaces - which is the reason for my post - plus consistency of the rules for calculating probabilities from equivalent Hilbert space representations, and not much more). It is:

Derivation of the Born Rule from Operational Assumptions, Simon Saunders
http://users.ox.ac.uk/~lina0174/born.pdf

I strongly recommend it (and if you can give me any feedback about it, that would be very helpful).

Thanks for all your help
 
  • #13
the_pulp said:
Derivation of the Born Rule from Operational Assumptions, Simon Saunders
http://users.ox.ac.uk/~lina0174/born.pdf

I strongly recommend it (and if you can give me any feedback about it, that would be very helpful).

If you are worried about the Born Rule - don't be. It's not a separate assumption - a very famous (but surprisingly not as well known as it should be) theorem called Gleason's Theorem guarantees it:
http://kof.physto.se/theses/helena-master.pdf

My route to QM is as follows. Suppose you have a system with n possible states. Map them to the orthonormal basis vectors of an n-dimensional vector space. Now apply coordinate invariance to deduce that the physics remains the same if you transform to any other orthonormal basis, meaning any vector in the vector space must also be a possible state. Use the requirement of continuous transformations between states to make it a complex vector space. Apply Gleason's Theorem and bingo - you have QM. It also explains why QM is a statistical theory - for a deterministic one you should be able to map the states to probabilities of 0 or 1, but Gleason's Theorem shows you can't do that - the continuity assumption basically fouls you up.

Had a peek at the link. Yeah - I know that one - it's basically an update of David Deutsch's argument. It's fine as far as I am concerned - I just prefer Gleason's Theorem for a few reasons. First, the objection in the paper that it only applies to spaces of dimension 3 and above has now been rectified by a simpler modern version based on POVM's instead of resolutions of the identity. Secondly, it immediately shows why QM must be statistical in a very elegant way. And finally, in my route to QM coordinate invariance (strongly related to, and implying, non-contextuality) is central, and Gleason's Theorem follows from that. In fact, Gleason's Theorem really shows what a powerful assumption non-contextuality is.

But that is simply a personal preference. It's good that two methods lead to the same result.
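If it helps to see what the theorem is about in concrete terms, here is a toy numpy sketch (mine, with made-up numbers, not from either paper): the trace formula assigns a non-negative probability to each projector in any resolution of the identity, and those probabilities always sum to 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# A pure state and its density matrix.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Any orthonormal basis gives a resolution of the identity {P_i}.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)                   # random unitary -> random basis
Ps = [np.outer(Q[:, i], Q[:, i].conj()) for i in range(3)]
assert np.allclose(sum(Ps), np.eye(3))

# Born/Gleason probabilities: p_i = Tr(rho P_i).
probs = np.array([np.trace(rho @ P).real for P in Ps])
print(np.round(probs, 4), round(probs.sum(), 10))   # non-negative, sum = 1
```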

Thanks
Bill
 
  • #14
Is there really a proof that the structure of QM is required, i.e. that emergent quantum mechanics is doomed to fail?

Isn't it more that QM is our most fundamental theory at the moment, extremely successful, with some poorly understood bits like measurement, so we accept it for the time being?
 
  • #15
atyy said:
Is there really a proof that the structure of QM is required, i.e. that emergent quantum mechanics is doomed to fail?

Isn't it more that QM is our most fundamental theory at the moment, extremely successful, with some poorly understood bits like measurement, so we accept it for the time being?

Of course there is no proof that the structure of QM is required - that is an experimental matter. All I am suggesting is that a way to present QM exists that makes it seem more reasonable and clarifies its basic assumptions and structure. And no, it is indeed possible that QM can emerge from another theory, e.g. Primary State Diffusion, although I personally believe it is fundamental - but my personal favorite interpretation, the ensemble interpretation, does whisper in your ear that there is more to it.

The poorly understood areas are being clarified further all the time - e.g. we know a lot more about decoherence and its effects than when Bohr and Einstein were around having their truly magnificent debates, and how it resolves a number of the issues. I believe it will all eventually be completely clarified.

Thanks
Bill
 
  • #16
bhobba said:
My route to QM is as follows. Suppose you have a system with n possible states. Map them to the orthonormal basis vectors of an n-dimensional vector space. Now apply coordinate invariance to deduce that the physics remains the same if you transform to any other orthonormal basis, meaning any vector in the vector space must also be a possible state. Use the requirement of continuous transformations between states to make it a complex vector space. Apply Gleason's Theorem and bingo - you have QM.

Yes, that was sort of my route. I was not very sure about:
1) The linearity of the states (I think that when someone tells me that the state space should be linear because of the superposition principle, it sounds like "the state space should be linear because the state space should be linear").
2) The use of complex numbers.

And, related to the Born rule, I was not using Gleason's theorem because I was thinking about Saunders' paper, but it is more or less the same (I will read your paper). Now, with Ballentine and the link you sent me about complex numbers, I think I've got my ideas a little more ordered.

Thanks!
 
  • #17
Do these derivations show that states aren't elements of Hilbert spaces, but rays?
 
  • #18
the_pulp said:
Yes, that was sort of my route. I was not very sure about:
1) The linearity of the states (I think that when someone tells me that the state space should be linear because of the superposition principle, it sounds like "the state space should be linear because the state space should be linear").

The reason it is a vector space depends on mapping the states to basis vectors and applying invariance - that's logically equivalent to saying it's a vector space to begin with, which is logically equivalent to the principle of superposition. It just depends on what appeals to your intuition more - that's all. To me the principle of superposition seems like a rabbit pulled out of a hat, but applying invariance is something that is very intuitive from everyday experience.

Thanks
Bill
 
  • #19
What do you mean by "applying invariance"? Invariance of what under change of what? Perhaps invariance of probability under change of equivalent state representations, as in Saunders?
 
  • #20
The only modification I'd make to that clear and nice structure is that I think one should always base the foundations on empiricism - what can be measured. So we should not start with possible states; we should start with possible outcomes of measurements, and then simply associate states with those outcomes. It's not a trivial difference - we are making some kind of stand when we claim there is a "state" associated with an outcome of a measurement, and indeed it is generally true that there is a whole family of degenerate states that all lead to that outcome.

So we may need to combine different measurements to resolve the degeneracies and generate a concept of individual "states", and if all our efforts to separate two states fail to break their degeneracy, then we can assert those states were never different in the first place. But then we have to hope that statistical mechanics will back us up - if we find a given state tends to be multiply populated relative to other states of the same energy (or worse, when dealing with identical fermions), then we have a real problem: either we have states that don't obey the statistics, or we have states that we are going to claim are different even though we cannot resolve their differences using any known measurements.

Those kinds of issues have apparently not been encountered, but it shows the kinds of issues that could be lurking if we are careful to separate eigenvalues of measurements from rationalistic descriptions of states. To be sure, this is all just the eigenvalue/eigenstate structure of quantum mechanics, which allows us to essentially equate the concepts, but we always have to recognize which one we are anchoring our physics on in case we encounter surprises.

Also, the continuity principle looks a bit different when you think about eigenvalues instead of states - it then means that if we can do a measurement that locates the particle in a box centered at A, or in a box centered at B, we should be able to do a measurement that locates the particle in a box centered anywhere between A and B. This sounds like a more physically reasonable assumption than assuming that the states themselves are continuous, which to me sounds like another kind of "rabbit".
 
  • #21
the_pulp said:
What do you mean by "applying invariance"? Invariance of what under change of what? Perhaps invariance of probability under change of equivalent state representations, as in Saunders?

The idea is that you map the states of a system to the orthonormal basis vectors of a vector space. Now you make a change of basis to another orthonormal basis (i.e. you rotate your coordinate system), so that your states now have a different representation. The laws of physics should not depend on your choice of basis (i.e. on whether you have rotated your coordinate system), so you conclude that any normalised vector in the vector space can be a possible system state.

I am not the guy who came up with it - Victor Stenger did (and he probably was not the only one):
http://www.colorado.edu/philosophy/vstenger/Nothing/SuperPos.htm
http://www.colorado.edu/philosophy/vstenger/Nothing/D_Gauge.pdf

'This is also called the superposition principle and is responsible for much of the difference between quantum and classical mechanics, in particular, interference effects and so-called entangled states. As we saw in chapter 4, linearity and thus the superposition principle are required to maintain point-of-view invariance. That is, point of-view invariance can be used as a motivation for the superposition principle, which, in conventional developments, is simply a hypothesis that almost seems to be pulled out of a hat.'
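As a toy illustration of that invariance (my own sketch, arbitrary numbers): a change of orthonormal basis is a unitary U; it changes the components representing the states, but no amplitude built from them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two states written in the original orthonormal basis.
psi = np.array([1.0, 0.0, 0.0], dtype=complex)     # a "basis state"
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi /= np.linalg.norm(phi)

# A change to another orthonormal basis is a unitary U; components
# transform as c -> U c.
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
psi2, phi2 = U @ psi, U @ phi

# The amplitude <phi|psi> - hence any physics built from it - is the
# same in either description.
print(np.vdot(phi, psi))
print(np.vdot(phi2, psi2))                         # identical up to rounding
```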

Thanks
Bill
 
  • #22
atyy said:
Do these derivations show that states aren't elements of Hilbert spaces, but rays?
It's been a while since I looked at any of them, but I recall always getting the feeling that they slipped normalization in for convenience.

IMHO, the ray approach is more fundamental - concentrating on angles between rays, and on the dynamical group action which maps rays into other rays. In this regard, there are papers which emphasize the use of the Fubini-Study metric topology rather than the usual Hilbert norm topology, but they always seem (IIRC) to use an underlying Hilbert space with normalizable vectors. But for unbounded operators and continuous spectra we already know that some sort of triplet space (rigged Hilbert, or partial inner product) is a more natural fit. I don't know of any papers which use the Fubini-Study topology on rays exclusively, but this seems to me an interesting line of research.
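For concreteness, a tiny numpy sketch (my own) of why the Fubini-Study distance ##d(\psi,\phi) = \arccos|\langle\psi|\phi\rangle|## fits rays better than the Hilbert norm does: it is blind to a global phase, while the norm distance between the vectors is not.

```python
import numpy as np

psi = np.array([1.0, 0.0], dtype=complex)
phi = np.exp(1j * 0.7) * psi        # same ray, different global phase

def fubini_study(a, b):
    # d(a, b) = arccos |<a|b>| for normalized a, b
    return np.arccos(np.clip(abs(np.vdot(a, b)), 0.0, 1.0))

print(np.linalg.norm(psi - phi))    # nonzero: the norm sees the phase
print(fubini_study(psi, phi))       # 0.0: the rays coincide
```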
 
  • #23
the_pulp said:
What do you mean by "applying invariance"? Invariance of what under change of what?
Underlying any (class of) dynamical system is a dynamical group. (Think canonical transformations in Hamiltonian mechanics, for example.) The observables for that system come from the generators of that group (or higher products thereof).

I still don't have enough spare time to write a more detailed explanation, but I mentioned a little about a related point in this thread:

https://www.physicsforums.com/showthread.php?t=548541
(See my post #15, as modified by post #26.)

Actually, it might be worth your time to read that whole thread if you haven't done so before.
 
  • #24
Sorry to come back to the same topic, but I've been trying to clear up my ideas and there is one little thing that I still don't know why we are taking as obvious.

1) Someone said (and I agree) that first there are experiments that can take certain values, and we associate those values with states. So, up to here, an experiment is an association of certain values with certain states.
2) To make 1) mean something we have to define what a state is. We know that the set of states has to satisfy two assumptions:
a) Continuity (continuity of experiments or continuity of states is, I think, more or less equivalent - I may be wrong - and this at some point will lead us to the use of complex numbers).
b) Metric space (because we know that there is a probability between two states, there is a notion of distance between two states).
The main ingredient that I am not assuming is linearity, but linearity (plus some bits more, such as the scalar product) can lead us to the Born rule (through Gleason or Saunders). What is the justification of linearity?
Perhaps you have already explained that to me. In that case I failed to notice, so I would be very thankful if you could enlighten me once again!

PS: and, again, I don't think that the "superposition principle" is the answer, because to me, saying "linearity" holds because of the "superposition principle" is like saying "linearity" holds because of "linearity".
 
  • #25
the_pulp said:
What is the justification of linearity?
I don't think there is one, other than "it's the simplest possibility, and our best experiments haven't given us a reason to consider another one".
 
  • #26
Fredrik said:
"it's the simplest possibility,
This much is agreed by everybody, but it's kind of a lazy reason not to consider other possibilities :biggrin:

Fredrik said:
and our best experiments haven't given us a reason to consider another one".
This is simply not true. Perhaps they haven't given you reasons to consider alternatives, but I assure you that is not the case for many theorists and experimentalists who have the physical (and mathematical) maturity to realize that the current QM formulations are only really good approximations in the linear limit.
Theories try to accommodate the world we live in, not the other way around (as you seem to imply), and experiments tell us that the world is not linear, so any linear theory that tries to model our world will logically be superseded by a non-linear one.
 
  • #27
the_pulp said:
What is the justification of linearity?
Pulp, as you know, linearity is an axiom or postulate of QM, and as such it doesn't have any justification in the strict sense; you won't find any, and it doesn't need any. That's the thing with axioms.
If you want some "psychological" justification or motivation, they have already been suggested; there are only two: it simplifies the math a great deal, and, most importantly, it works (to a very good approximation). The fact that experiments tell us that the superposition principle doesn't hold in certain circumstances (you can consult the bibliography on nonlinear quantum optics, nonlinear QED and QFT, etc., or the arbitrary and esoteric regularization procedures in QFT) is a hint that QM is not the definitive theory. Of course, this is a kind of trivial observation, since every science book explains that no theory is definitive, just the current best approximation to experiment.
 
  • #28
TrickyDicky said:
... and experiments tell us that the world is not linear, so any linear theory that tries to model our world will logically be superseded by a non-linear one.

Which experiments tell us that?
 
  • #29
martinbn said:
Which experiments tell us that?
Not sure if this is a rhetorical question; maybe an easier question, if you ask an engineer, would be which outcomes of actual observations and experiments are perfectly linear - that list would be shorter.
If one goes to certain contexts, for instance high enough intensities (energies), to start somewhere within the EM field: you have the electrical characteristics of p-n junctions, or, in the optical region, you might have heard about high-intensity laser beams.
 
  • #30
My opinion on the matter is that we use vectors to name states for no deeper reason than intellectual inertia, rooted in the fact that the original development of QM was phrased in such a naming scheme.

And I believe that if QM was originally phrased in some other naming scheme, the Hilbert space version would quickly be discovered; if not intuitively, then systematically as an application of representation theory.
the_pulp said:
why we are not using, for example, some topological space (which is more general and doesn't have a notion of distance) or some other set of mathematical objects.
We do. Other objects commonly used in elementary QM include the space of rays in a Hilbert space, the space of density operators, and the space of positive linear functionals on the algebra of observables.

the_pulp said:
Why do we want the mathematical objects that represent states to be linear and complete, and to have an inner product?
We understand vector spaces very well, so we look for ways to apply them to solve problems. (e.g. representation theory)

Completeness is for the usual reasons; e.g. so that calculus is well-behaved.

IMO, we want an inner product for no deeper reason than that's what we chose to use in formulas (but really, history+intellectual inertia again). If we chose some other method to connect Hilbert space elements to the notion of a state, or some other naming scheme entirely, then we would use that instead. (and the laws of physics would take a correspondingly different form)

There are other justifications, though. E.g. the inner product is closely related to the operator norm, and the operator norm is closely related to the possible outcomes of an observable. Specifically, it is* the magnitude of the largest outcome. (And through clever arithmetic, I believe you can use the norm to determine all the outcomes of an observable.)

*: With the usual caveats for handling the notion of "largest" appropriately.
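A quick numerical check of that starred claim (my own sketch): for a Hermitian matrix, the operator (spectral) norm equals the magnitude of the extreme eigenvalue, i.e. the largest possible outcome in magnitude.

```python
import numpy as np

rng = np.random.default_rng(2)

# A random Hermitian "observable".
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (B + B.conj().T) / 2

op_norm = np.linalg.norm(A, 2)        # operator norm: largest 2-norm gain
outcomes = np.linalg.eigvalsh(A)      # possible measurement outcomes
print(op_norm, max(abs(outcomes)))    # equal up to rounding
```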
 
  • #31
Pulp, as you know, linearity is an axiom or postulate of QM, and as such it doesn't have any justification in the strict sense; you won't find any, and it doesn't need any. That's the thing with axioms.

What you said is mathematically correct. But perhaps you can express those axioms as theorems of another, equivalent set of axioms that, maybe, sound more familiar than the first ones.

For example, I have read in a lot of places that the commutation relation inherent in the Heisenberg principle is an axiom of QM. This, while mathematically true, sounded to me as if, when "god created" QM, he thought that, for no reason, the commutator of p and x should be something different from 0. And I find that idea ridiculous.
Later, I read in some other places (for example, in the Ballentine book mentioned in this thread) that the form of P in the x representation (and, as a consequence, the commutator between them, and hence the Heisenberg principle) can be derived from the properties the space is assumed to have (isotropy and such). And this sounds much closer to the "real truth" than the axiom of the commutator. (However, perhaps we will find that the successor of QM that solves the Planck-length and GR problems takes the Heisenberg principle as more fundamental than those properties of space; I am just saying that it does not sound likely - to me, just to me.)
And there are more examples. Bhobba told me here that the Born rule can be derived through Gleason's theorem (I knew a similar derivation by Saunders). The use of complex numbers can be derived by assuming that nature is continuous...
I mean, it seems (again, to me, just to me) that every abstract axiom in QM expressed as a mathematical relation can be re-expressed as a theorem following from some previous axioms that sound (as always, to me, just to me) "more physical".
And here I arrive at linearity. Is there any other axiom that you know of that would sound more physical to me (perhaps you'll ask how you would know what sounds more physical to me? I just think you know) and through which linearity can be demonstrated?

Thanks to all for continuing to try to resolve my doubts!
 
  • #32
Gleason's theorem is a great theorem, but in what sense does it explain the Born rule? It doesn't explain why one would need probability.
 
  • #33
atyy said:
Gleason's theorem is a great theorem, but in what sense does it explain the Born rule? It doesn't explain why one would need probability.

I don't know; Bhobba told me that and I believed it and repeated it. I only read Saunders' paper, and I know that that paper indeed proves the Born rule from previous and reasonable axioms. Here is Bhobba's post; perhaps I did not understand it correctly:

bhobba said:
If you are worried about the Born Rule - don't be. It's not a separate assumption - a very famous (but surprisingly not as well known as it should be) theorem called Gleason's Theorem guarantees it:
http://kof.physto.se/theses/helena-master.pdf

Nevertheless, it is just an example. I don't want to miss the aim of this thread, which (after a lot of very useful answers from all of you that cleared up a lot of the mess in my mind) is to look for an argument to support the linearity hypothesis (if it is the case that there indeed is an argument out there).
 
  • #34
TrickyDicky said:
This is simply not true. Perhaps they haven't given you reasons to consider alternatives, but I assure you that is not the case for many theorists and experimentalists who have the physical (and mathematical) maturity to realize that the current QM formulations are only really good approximations in the linear limit.
Theories try to accommodate the world we live in, not the other way around (as you seem to imply), and experiments tell us that the world is not linear, so any linear theory that tries to model our world will logically be superseded by a non-linear one.
Uh, what? I'm just saying that the assumption of linearity gives us a theory that hasn't been contradicted by experiments. It's ridiculous to accuse me of lacking physical and mathematical maturity because of this. And your claim about what I "seem to imply" is even sillier.
 
  • #35
atyy said:
Gleason's theorem is a great theorem, but in what sense does it explain the Born rule? It doesn't explain why one would need probability.

the_pulp said:
I don't know; Bhobba told me that and I believed it and repeated it.
Once we have decided to look for a theory in which the states assign probabilities to the subspaces of some Hilbert space, Gleason's theorem tells us we don't have the freedom to choose the probability assignments, because there's only one meaningful way to make them. (Without such a theorem, different probability assignments would have defined different theories.)

What atyy is getting at is that Gleason's theorem doesn't tell us why we should make that kind of probability assignment in the first place.
 
  • #36
the_pulp said:
[...] an argument to support the linearity hypothesis (if it is the case that there indeed is an argument out there).
In this context, it is eye-opening to study Ballentine section 7.1, in which he derives the well-known half-integral angular momentum spectrum from nothing more than the SO(3) generators, their commutation relations, and the assumption that they are represented as Hermitian operators on an abstract Hilbert space.

AFAIK, there is no other method of deriving the angular momentum spectrum that does not involve use of such a linear Hilbert space.

Though this is not a conclusive "it-can't-be-anything-else" argument, it does set quite a high bar that alternative approaches must clear.
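As a small numerical companion to that derivation (my own check, not Ballentine's text): the standard spin-1/2 and spin-1 matrices realize the commutation relations, and ##J^2## takes the value ##j(j+1)## that the derivation predicts.

```python
import numpy as np

def check(Jx, Jy, Jz, j):
    # The angular momentum algebra: [Jx, Jy] = i Jz (hbar = 1).
    assert np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz)
    J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
    # On the spin-j space, J^2 = j(j+1) times the identity.
    assert np.allclose(J2, j * (j + 1) * np.eye(len(Jz)))
    print("spin", j, "ok; Jz outcomes:", np.round(np.linalg.eigvalsh(Jz), 3))

# Spin 1/2: J = sigma / 2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
check(sx / 2, sy / 2, sz / 2, 0.5)

# Spin 1: the standard 3x3 matrices.
r = 1 / np.sqrt(2)
Jx = np.array([[0, r, 0], [r, 0, r], [0, r, 0]], dtype=complex)
Jy = np.array([[0, -1j * r, 0], [1j * r, 0, -1j * r], [0, 1j * r, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
check(Jx, Jy, Jz, 1.0)
```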
 
  • #37
strangerep said:
In this context, it is eye-opening to study Ballentine section 7.1 in which he derives the well-known half-integral angular momentum spectrum from nothing more than the SO(3) generators, their commutation relations, and the assumption that they are represented as Hermitian operators on an abstract Hilbert space.
Maybe I'm misinterpreting the intent of your statement, but it doesn't sound right.

In the proof idea as I know it, you don't presuppose a representation of SU(2) at all.

Instead, the argument proceeds (roughly) by looking at all representations, and invoking the fact that any element of the spectrum appears as an eigenvalue in some representation.

On an unrelated note: while I mentioned that I find the use of Hilbert spaces (and the linearity therein) merely a matter of mathematical technique, I do find the fact that the observable algebra is an algebra curious. Why do expressions like PX and P+X make sense?
 
  • #38
Fredrik said:
What atyy is getting at is that Gleason's theorem doesn't tell us why we should make that kind of probability assignment in the first place.

The way I see it, you have two choices - either a deterministic or a statistical theory. The deterministic case, however, is contained in the statistical one - it simply means the probabilities are 0 or 1. What Gleason's Theorem shows is that you can't assign only 0 and 1 to the subspaces of a vector space, thus ruling out the deterministic case. The Kochen-Specker Theorem does as well, but it really is a simple corollary of Gleason's Theorem.

The issue with Gleason's Theorem is its hidden assumption of non-contextuality - you assume the probability assignment does not depend on the way the rest of the vector space is partitioned off by a resolution of the identity. However, this fits nicely with my approach to QM, which is based on invariance.

Thanks
Bill
 
  • #39
bhobba said:
The way I see it, you have two choices - either a deterministic or a statistical theory. The deterministic case, however, is contained in the statistical one - it simply means the probabilities are 0 or 1. What Gleason's Theorem shows is that you can't assign only 0 and 1 to the subspaces of a vector space, thus ruling out the deterministic case. The Kochen-Specker Theorem does as well, but it really is a simple corollary of Gleason's Theorem.

The issue with Gleason's Theorem is its hidden assumption of non-contextuality - you assume the probability assignment does not depend on the way the rest of the vector space is partitioned off by a resolution of the identity. However, this fits nicely with my approach to QM, which is based on invariance.

Thanks
Bill

Does one have to assume that measurements are projective operations?
 
  • #40
atyy said:
Does one have to assume that measurements are projective operations?

No. The idea is that, up to a multiplicative constant (the superposition principle implies this - superimposing a state with itself gives the same state), you know the outcome of any observation is an element of the vector space, so you are assigning probabilities to the projection operators in a resolution of the identity (the subspace generated by multiplying any element by a constant automatically defines a projection operator). Gleason's Theorem shows the usual trace formula for probabilities is the only one that can be defined. As mentioned, there is a hidden assumption - namely, that the probability does not depend on which resolution of the identity a projection operator is part of - which is pretty much what a vector space is all about anyway (i.e. the elements do not depend on whatever basis you choose), but it is nonetheless an assumption.

You then associate each projection operator with a real number, so a Hermitian operator defines a resolution of the identity via its eigenvectors, with the eigenvalues being the values associated with each projection operator.
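As a concrete sketch of that association (mine, with made-up numbers): the eigenprojectors of a Hermitian operator form a resolution of the identity, and the trace formula then reproduces the expectation value as an ordinary probability-weighted average of the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(3)

# A Hermitian observable and a pure state.
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (B + B.conj().T) / 2
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# The eigenvectors give the resolution of the identity; the eigenvalues
# are the real numbers associated with each projection operator.
vals, vecs = np.linalg.eigh(A)
Ps = [np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(3)]
assert np.allclose(sum(Ps), np.eye(3))

probs = [np.trace(rho @ P).real for P in Ps]    # trace-formula probabilities
print(sum(p * a for p, a in zip(probs, vals)))  # probability-weighted average
print(np.trace(rho @ A).real)                   # agrees with Tr(rho A)
```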

Thanks
Bill
 
  • #41
Hurkyl said:
Maybe I'm misinterpreting the intent of your statement, but it doesn't sound right.
Do you have a copy of Ballentine at hand? I believe what I said does indeed correspond to what he does.

In the proof idea as I know it, you don't presuppose a representation of SU(2) at all.

Instead, the argument proceeds (roughly) by looking at all representations, and invoking the fact that any element of the spectrum appears as an eigenvalue in some representation.
I haven't seen it done with that emphasis. Can you give me a (readable) reference?

On an unrelated note: while I mentioned that I find the use of Hilbert spaces (and the linearity therein) merely a matter of mathematical technique, I do find the fact that the observable algebra is an algebra curious. Why do expressions like PX and P+X make sense?
Maybe because the symmetry/dynamical groups tend to be Lie groups, and physical representations are classified via their Casimirs? (Also, continuity and differentiability, etc., imply certain properties of the generators.)

BTW, if the same question is posed in the classical regime, we have things like ##J = X \times P##, and also ##H = P^2 + X^2##, etc. Is this curious or boring? :-)
 
  • #42
No. POVM's are a more general approach:

http://en.wikipedia.org/wiki/POVM
  • #43
strangerep said:
No. POVM's are a more general approach:

http://en.wikipedia.org/wiki/POVM

Indeed it is, and the modern proof of Gleason's Theorem makes use of them instead of resolutions of the identity - but POVM's are derivable from projections of higher dimensional resolutions of the identity via Newmarks Theorem.

Thanks
Bill
 
  • #44
Helloooo! I opened this thread because I was asking about linearity!

And here I arrive at linearity. Is there any other axiom that you know of that would sound more physical to me (perhaps you'll ask how you would know what sounds more physical to me? I just think you know) and through which linearity can be demonstrated?

(Nevertheless, the talk about Gleason's Theorem is pretty interesting.)
 
  • #45
A question relating linearity and measurements:

Compare the classical and Schroedinger wave equations. Both have linear solution spaces. However, a solution of the classical equation is not considered a state, because it contains only information about position, not velocity, and both can be measured. However, a solution of the Schroedinger equation is considered a state. Is this because the quantum notion of position or velocity is different from the classical one?
 
  • #46
atyy said:
A question relating linearity and measurements:

Compare the classical and Schroedinger wave equations. Both have linear solution spaces. However, a solution of the classical equation is not considered a state, because it contains only information about position, not velocity, and both can be measured. However, a solution of the Schroedinger equation is considered a state. Is this because the quantum notion of position or velocity is different from the classical one?

I don't think so. The classical wave equation is second order with respect to time; the solution plus its time derivative is the state. The state should be something that determines the future evolution of the system, and in the classical case the solution of the equation (at a given time) is not enough for initial conditions.
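A toy numerical sketch of that point (mine, for a crudely discretized string): the pair (u, u_t) advances uniquely, while the same u with two different velocity profiles has two different futures.

```python
import numpy as np

# Discretized wave equation u_tt = c^2 u_xx on a ring: the STATE is the
# pair (u, v = u_t); the displacement u alone is not enough.
n, c, dx, dt = 64, 1.0, 1.0, 0.5
x = np.arange(n) * dx
u = np.exp(-0.1 * (x - n * dx / 2) ** 2)    # initial displacement
v = np.zeros(n)                             # initial velocity

def step(u, v):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    return u + dt * v, v + dt * c**2 * lap  # crude explicit update

# Same u, two different v's -> two different futures:
u1, _ = step(u, v)
u2, _ = step(u, v + 1.0)
print(np.allclose(u1, u2))  # False: u alone did not determine the evolution
```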
 
  • #47
strangerep said:
I haven't seen it done with that emphasis. Can you give me a (readable) reference?
Unfortunately, it's not something I've actually worked through. Here is Wikipedia's page on the topic. The result they state near the bottom is what I remember - the fact you derive is not a theorem about the behavior of SU(2) in a particular representation: instead, it is a complete classification of all irreducible finite-dimensional representations.

Mulling it over, I think the statement you are referring to (I don't have the text) is effectively equivalent, just phrased differently.



BTW, if the same question is posed in the classical regime, we have things like ##J = X \times P##, and also ##H = P^2 + X^2##, etc. Is this curious or boring? :-)
Boring. On any particular state, P and X are definite numbers, and so ##P^2 + X^2## makes sense in the classical regime: "measure position and momentum, square them, then add them".

I can dramatically point out the issue with spin states. Let X and Y be the spin about the x and y axes. If we try to think of the spectrum of an operator as the possible outcomes of a hypothetical measurement, and we try to interpret X+Y in the same way as I mentioned classically, we run into the problem that the interpretation says ##1 + 1 = \sqrt{2}##! (Or, more accurately, if we add either of ##\pm 1## and either of ##\pm 1##, we get either of ##\pm \sqrt{2}##.)
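This is easy to verify numerically; a throwaway numpy check (mine):

```python
import numpy as np

# Spin-1/2 spin about the x and y axes (in units of hbar/2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

print(np.linalg.eigvalsh(X))      # outcomes of X alone: -1, +1
print(np.linalg.eigvalsh(Y))      # outcomes of Y alone: -1, +1
print(np.linalg.eigvalsh(X + Y))  # outcomes of X+Y: -sqrt(2), +sqrt(2)
```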
 
  • #48
Hurkyl said:
Mulling it over, I think the statement you are referring to (I don't have the text) is effectively equivalent, just phrased differently.
After a night's sleep, I now get what you were previously saying. A more careful revision of what I said should include some caveats about superselection sectors for different Casimir values.
I can dramatically point out the issue with spin states. Let X and Y be the spin about the x and y axes. If we try to think of the spectrum of an operator as the possible outcomes of a hypothetical measurement, and we try to interpret X+Y in the same way as I mentioned classically, we run into the problem that the interpretation says ##1 + 1 = \sqrt{2}##! (Or, more accurately, if we add either of ##\pm 1## and either of ##\pm 1##, we get either of ##\pm \sqrt{2}##.)
I wonder whether one hits the same issue if the quantum case were constructed using POVMs corresponding to spin-coherent states. It's been a while since I looked at these, and it's too early in the morning right now... :-)
 
  • #49
bhobba said:
[...] POVM's are derivable from projections of higher dimensional resolutions of the identity via Newmarks Theorem.
I guess you meant "Neumark", or "Naimark"? For other readers, this is also mentioned in the Wiki page on POVMs previously linked. More specifically:
http://en.wikipedia.org/wiki/Neumark's_dilation_theorem

BTW, how does this work in the case of unbounded operators and continuous spectra? The construction of a "higher dimensional" Hilbert space seems like it might be a bit tricky in that context, and the Wiki page seems to deal only with bounded operators.
 
  • #50
the_pulp said:
Helloooo! I opened this thread because I was asking about linearity!
Sorry if you've received the impression that your thread was being hijacked. The posts were indeed mostly relevant to your topic, but clearly that was not obvious. [Although... perhaps the stuff on Naimark's theorem should be moved to a different thread.]

Did you get (or study) the point I was trying to make in my earlier reference (post #36) to Ballentine's derivation of quantum angular momentum spectra?
 