Everybody says that if you have two states |0> and |1> then their superposition is a linear combination a|0> + b|1>. But you also have freedom to choose a basis, and you can choose one in which this combination is a basis vector, so in this new basis it becomes just c|1'>, apparently losing the indication of superposition. This would suggest (I am sure wrongly!) that superposition is just a coordinate effect, i.e. "not real". I am sure there is rational discussion of this point somewhere, but I haven't seen it.
Usually, superpositions are with respect to a basis of eigenstates of a particular observable. Changing the observable definitely changes the superposition. I think this is a bit psychological - in our minds, we tend to associate "real" properties with eigenstates, e.g., the particle is in a state that is a bit of "over here" and a bit of "over there". Penrose, in section 30.12 of his Road to Reality, writes about some problems he has had with the situation that you have brought up.
I agree with you; that is, I think that in QM, linear superposition is an artifact of the mathematics, not a part of the physics. There are a number of reasons for believing this.

First, QM is not a truly linear theory. An example of a linear theory is E&M. If you solve Maxwell's equations you get a bunch of E&M fields. Tripling those fields gives you a solution that physically corresponds to a situation that is distinct from the one you started out with, in that the fields are three times stronger. But tripling the wave function of a solution to Schroedinger's equation does not produce a new wave function that corresponds to a particle that is three times more intense. It just gives you a wave function for the same thing you started out with.

Second, there are alternative formulations of QM that do not possess linear superposition. For example, any state vector wave function can be converted to a density matrix form that still allows the calculation of any physical result, but one cannot take the linear superposition of two density matrices and get a new density matrix. The reason is that instead of being linear, the density matrix forms are bilinear. At the heart of the density matrix formalism is the idempotency equation: [tex]\rho_u\;\rho_u = \rho_u .[/tex] This equation is the simplest nonlinear equation you can write down, and the fact that it is at the heart of this formulation of QM suggests to me that the essence of QM is nonlinearity, not linearity.

Now when you write the idempotency relation in the state vector formalism it becomes a normalization condition: [tex]\rho_u = |u\rangle\langle u|, [/tex] [tex]\langle u| u\rangle = 1 \to \rho_u\;\rho_u = \rho_u,[/tex] but if you look at it the other way around, assuming that it is the density matrices that are fundamental, then the central equation is nonlinear. Linear tools are much easier to build, so naturally we usually use QM in its linear form.
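To make the idempotency point concrete, here is a small numerical sketch (Python with numpy; the two states are arbitrary ones I picked for illustration): a pure density matrix squares to itself, while the sum of two density matrices does not.

```python
import numpy as np

# A pure spin-1/2 state |u>, chosen arbitrarily.
u = np.array([np.cos(0.3), np.exp(1j * 0.7) * np.sin(0.3)])
rho_u = np.outer(u, u.conj())              # rho_u = |u><u|

# Idempotency: rho rho = rho holds for a pure state.
print(np.allclose(rho_u @ rho_u, rho_u))   # True

# A second, different pure state.
v = np.array([np.cos(1.1), np.exp(1j * 0.2) * np.sin(1.1)])
rho_v = np.outer(v, v.conj())

# An equal-weight sum of two density matrices is NOT itself
# a pure density matrix: idempotency fails.
mix = 0.5 * (rho_u + rho_v)
print(np.allclose(mix @ mix, mix))         # False
```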
But forcing linearity onto a nonlinear world can make for some confusion. For example, when one rotates a spinor through an angle of 2 pi, it takes a multiplier of -1. Since we use spinors to represent electrons, the popular belief is that rotating an electron by 2 pi causes it to become something different from what it was before. But when one works in the density matrix formulation, there is no multiplication by -1. In fact, the usual complex phase ambiguity of state vectors is completely absent from the density matrix formulation.

Anyway, this is one of the reasons I am trying to popularize the density matrix formulation of QM. I started a website on this, but it still has a long way to go. It does include a "wiki" where people are invited to make contributions, comments, etc. http://www.DensityMatrix.com Carl
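The 2 pi rotation remark is easy to check numerically. A minimal sketch, using the standard spin-1/2 rotation operator exp(-i theta sigma_z / 2) about the z axis:

```python
import numpy as np

def rot_z(theta):
    # Spin-1/2 rotation about z: exp(-i theta sigma_z / 2),
    # which is diagonal in the sigma_z eigenbasis.
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

psi = np.array([1.0, 0.0], dtype=complex)   # spin-up spinor
R = rot_z(2 * np.pi)

# The spinor picks up a factor of -1 under a full 2*pi rotation...
print(np.allclose(R @ psi, -psi))           # True

# ...but the density matrix |psi><psi| is completely unchanged.
rho = np.outer(psi, psi.conj())
rho_rot = R @ rho @ R.conj().T
print(np.allclose(rho_rot, rho))            # True
```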
There are of course no states with an objective label "superposition" and other states with a label "no superposition". Take the Euclidean plane. The statement of superposition is that given two different lines through the origin, we can make new lines by superposing their unit vectors (and hence fill in the entire plane). But there are of course no special lines which are "superposed lines" and others which are "not in superposition". The point is simply that, given two lines, you fill in the entire plane. And given two states, you fill in an entire projective complex plane. This is the whole idea of the superposition principle.

Now, it might be (and this happens) that superpositions of known states generate other states which are also known. For instance, you could say that states of a particle with given positions are "known states", which we label by |q1>, |q2> ... |qn> ..., where q1, q2 ... qn are postulated to be possible positions. The thing that the superposition principle tells you is that you cannot just make that list and stop, because all the linear combinations are ALSO states. Maybe you know them, maybe not. So it could be, say, that the superposition of 3 |q1> and 2 i |q2> just yields another |q5>. How exactly this web is supposed to hang together is entirely open to specification.

The above doesn't happen, in fact: no superposition of two "position states" gives you a new "position state" (although the superposition principle doesn't forbid it). BUT, something of the kind DOES happen: a superposition of different position states, Int(dq Exp(i p q) |q>), produces another state, |p>, which is this time a momentum state. So certain superpositions of postulated states give you other existing states you knew already. However, MANY, MANY more states are usually postulated to be produced. Again, this is entirely up to the specific quantum model you introduce. You're only supposed to say what the INDEPENDENT states are.
But you could think of a "quantum theory" where there are only 4 independent position states, namely |1x>, |1y>, |1z> and |1r>. You could then *postulate* that a |1x> + b |1y> + c |1z> + d |1r> gives you another position state in space corresponding to x = a/r, y = b/r, z = c/r for x, y, z real; I'm just inventing something. It is entirely up to you to determine what the independent states are, and what the relationships are between known states and superpositions of other known states. Usually, we start by saying that all the points corresponding to a classical configuration space correspond to independent states, and that their Fourier combination yields states that correspond to the configuration space of conjugate momenta, but we're absolutely not obliged to do so.

However, the superposition principle simply forbids you NOT to consider the superposition of two known states. It must be something: either another known state, or a new state. And if we say that there corresponds an independent state to each point of configuration space, then we create A LOT of new states, of which the momentum states are only a tiny, tiny fraction.

But the most striking consequence of the superposition principle is that (even approximately) DISCRETE states cannot exist. Discrete states, such as a yes/no outcome of an observation. You now have to say what the superposition states of "yes" and "no" are. And this is the origin of the entire measurement problem: the fact that you cannot split the entire state space into a part where we have "yes" and another part where we have "no" (which is no problem in classical configuration space). So when we talk about "superposed states", although the concept has, by itself, no absolute meaning, we usually mean a state which is not a "known state" such as a position state, or a momentum state, or a "yes" state, or a "no" state.
Lots of deleted stuff

As always, an excellent post by Patrick. You know, I have always thought that the transition from CM to QM could be very succinctly summarized as "replace OR by AND"! In CM, we have "the particle is in that state OR in that state", whereas in QM, we have, in general, "the particle is in that state AND in that state", etc. Of course, "replacing OR by AND" does not give all one needs to get QM (once one has AND, one must specify what this implies for measurements and so on). But I think it encapsulates nicely one key difference between CM and QM. So here's my motto for the day: "In going from CM to QM, replace OR by AND." Pat
Yes, but that is because the state space is not Hilbert space, but a projective slice of it, of course: the space of rays, which is in 1-1 correspondence with the projectors on the rays, which are isomorphic to the density matrices you're so fond of.

Yes, that's simply because it is the same point in the projective space. The superposition principle still hides in the density matrix formalism, because the density matrices are MATRICES and not diagonal matrices. But it is the superposition principle applied in a projective space, that's all. Well, in the same way as you can do 3-dim Euclidean geometry with 4 homogeneous coordinates in projective space, where the Euclidean linearity becomes a bit more convoluted, in the same way the density matrix formalism hides the linear structure a bit. You could turn it upside down: the fact that we can do all the calculations you need to do with density matrices, with their appearance in non-linear combinations, in a Hilbert space formalism, means that the underlying theory is linear!

Yes, and all this because we are in fact working in a projective space, and not in the linear space which is associated to it.

Yes, but that's again because of the confusion of the Hilbert space with the true state space, which is projective. Quantum states = rays in Hilbert space = points in the projective Hilbert space = projectors = idempotent density matrices.

Nevertheless, there are some issues with the purely projective jargon also. How do you distinguish between the state |a> + |b> and the state |a> - |b>, when starting from |a><a| and |b><b|? Clearly |a> + |b> and |a> - |b> are physically distinct states.
So to make the projectors of |a> + |b> and |a> - |b> you have little choice but to go through the Hilbert vector formalism and simply write: (|a> + |b>)(<a| + <b|) = |a><a| + |b><b| + |a><b| + |b><a|. But to go directly from "particle goes through slit a and slit b at once, in phase" to the above expression is far from clear, whereas the |a> + |b> is really evident. So the density matrix form is not always the clearest way to express things. Sometimes it is, sometimes it isn't.
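For what it's worth, that expansion is easy to verify numerically. A small sketch (the two example states are arbitrary choices of mine) checking the term-by-term expansion of the projector on |a> + |b>:

```python
import numpy as np

# Two normalized states (non-orthogonal is fine), chosen arbitrarily.
a = np.array([1.0, 0.0], dtype=complex)
b = np.array([0.6, 0.8], dtype=complex)

# Build the projector on |a> + |b> two ways:
# 1. directly from the (unnormalized) ket sum,
s = a + b
P_direct = np.outer(s, s.conj())

# 2. by expanding (|a>+|b>)(<a|+<b|) term by term.
P_expanded = (np.outer(a, a.conj()) + np.outer(b, b.conj())
              + np.outer(a, b.conj()) + np.outer(b, a.conj()))

print(np.allclose(P_direct, P_expanded))   # True
```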
It is the measuring apparatus which "selects" the basis. Therefore, superposition is a relative concept, that is to say, relative to the type of measurement which one wishes to perform.
Well, I am somewhat clearer, but I don't think eigenstates are the only case where superposition becomes significant. As I posted, everybody (i.e. textbook writers) wants to sell you superposition of states, or kets in the Dirac algebra. Now I fully agree that a vector in a vector space, even an infinite-dimensional one, has a uniqueness independent of choice of basis. That is not my issue. My issue concerns linear combinations of vectors. These are not unique independently of basis. In order to get a "thing" in, for example, plane geometry you have to take a pair of vectors as a thing, and consider its multiples, which gives you a tessellation of the plane that is independent of basis. But nobody seems to define superposition this way, and you don't get that arbitrarily in higher dimension; we are talking about Clifford algebras, and that is a non-trivial extension.

And in the fraught subject of Bell tests, it seems to me that superposition is key, and there is this basis problem, and I wonder if these things are connected. Like: "the state formalism is static and we assume unitary evolution in order to preserve state values; and then when we consider the case of two particles that share a state, and a superposed one at that, and we also have enough control over locality to separate those two particles to spacelike relationship without breaking or projecting the state, then we have problems, because we can't just appeal to eigenvalues, since the state is for the moment uncollapsed; but the state of superposition is very delicate in that it can be destroyed wrt one particle without (?) being destroyed at the other - or the one observer sees the state at the other as a superposition of the two possible actions the other could perform; THEN (got to the end eventually) the rather unsolid conceptual state of 'superposition' could be a factor in describing what goes on" - or am I missing something?
This is a very good point, and an opportunity to show what is going on from the density matrix point of view. The problem with doing it with spinors as you have written is that |a><b| is not a pure density matrix, and so is not very well defined in the pure density matrix formalism. In particular, the value of that operator has an arbitrary phase that depends on how one gets from a density matrix |a><a| to a spinor |a>. The density matrix equivalent of the addition of two spinors should be the combination of two pure density matrices so as to obtain a new density matrix that is also pure, and this is possible to do in a natural way, as I now show.

To see how to do this, it is useful to know how one obtains things that act like spinors inside the pure density matrix formalism. Choose an arbitrary state |0><0|, which will be treated sort of like a vacuum state. The only restriction is that neither <0|a> nor <0|b> can be zero. Define:

|a> = |a><a| |0><0|
<a| = |0><0| |a><a|

and similarly for |b>. I leave it as an exercise for the reader to show that the above do act like spinors in that one can compute matrix elements, etc. This relies on the fact that products of the form |0><0| ... |0><0| can always be reduced to a complex multiple of |0><0|, which is how complex numbers end up as matrix elements in the density matrix formalism without the need for bringing them in by defining a complex-valued trace function. Mathematically, this has to do with a property of primitive idempotents and ideals in an algebra.

Now add two states as follows:

|a> + |b> = |a+b> = (|a><a| + |b><b|) |0><0|

and similarly for <a+b|. Note that the RHS is defined only in terms of pure density matrices. To get the pure density matrix for |a+b><a+b|, simply multiply the bra and ket forms together:

|a+b><a+b| = (|a><a| + |b><b|) |0><0| |0><0| (|a><a| + |b><b|)
= (|a><a| + |b><b|) |0><0| (|a><a| + |b><b|)

which is defined, again, only with density matrices.
Note that the presence of the |0><0| is required for this; you cannot simply add pure density matrices to get another pure density matrix. In the above I assumed that |0><0| was normalized in getting to the second line. As when one adds spinors, the sum is no longer normalized. I leave it as an exercise for the reader to verify that it is, in fact, possible to normalize the above by multiplying by a constant to turn it into a pure density matrix. (More generally, let M and N be arbitrary matrices, and let O be a primitive idempotent. Prove that (MON)(MON) = k(MON) where k is a complex number.)

Now the above method of adding pure density matrices appears complicated, but it should be remembered that adding states is something that happens only in the mathematics. The real world doesn't need to add states together. Also note that the above method of adding pure density matrices gives a result that depends on the choice of |0><0|. A little consideration will show that the same thing happens with spinors. For example, do a spinor sum calculation for two randomly chosen spinors in two different choices of representation (for example, once with the usual S_z basis, and once with some other basis such as S_x) and you will find that the results differ. This all gets back to the arbitrary phases that spinors carry around.

I really can't give this much algebra in a post without referencing my Quixotic attempt to reformulate quantum mechanics in pure density matrix form at the website http://www.DensityMatrix.com It's my belief that to truly understand the spinor formalism you must understand the pure density matrix formalism, and when one does this, the structure of the elementary particles will become a lot easier to compute. Basically, the spinor formalism makes it easy to compute interference as a function of spatial position, as in the 2-slit experiment. The reason it is so easy to compute with spinors is because spinors can be added together.
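Since the construction is a bit unusual, here is a numerical sanity check of it (a sketch with arbitrarily chosen states; the helper `proj` and the specific vectors are my own, not part of the original notation): the sandwich (|a><a| + |b><b|) |0><0| (|a><a| + |b><b|), once normalized by its trace, is again a pure (idempotent) density matrix.

```python
import numpy as np

def proj(v):
    # Pure density matrix |v><v| / <v|v> for a ket v.
    v = np.asarray(v, dtype=complex)
    return np.outer(v, v.conj()) / (v.conj() @ v)

# Pure density matrices for two states and a "vacuum" state |0><0|.
# The only requirement is <0|a> != 0 and <0|b> != 0.
rho_a = proj([1.0, 0.3])
rho_b = proj([0.2, 1.0])
rho_0 = proj([1.0, 1.0])

# The sum, done entirely with density matrices:
# |a+b><a+b|  ~  (rho_a + rho_b) |0><0| (rho_a + rho_b)
S = (rho_a + rho_b) @ rho_0 @ (rho_a + rho_b)

# Normalize by the trace; the result is again a PURE
# density matrix, i.e. idempotent with trace 1.
S = S / np.trace(S)
print(np.allclose(S @ S, S))   # True
```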
But elementary particles need to be understood geometrically, something that is simpler in pure density matrix formalism than in spinor formalism. For example, the solution to the eigenvector problem for the state corresponding to spin+1/2 in the unit vector [tex](u_x,u_y,u_z)[/tex] direction is trivial in pure density matrix form: [tex]|u><u| = (1 + u_x\sigma_x + u_y\sigma_y + u_z\sigma_z)/2[/tex] where the factor of 2 normalizes the state. With pure density matrices, the normalization is unique, with spinors it is not. Compare the simplicity of the above to the spinor calculation, where one finds an eigenvector for the operator [tex] u_x\sigma_x + u_y\sigma_y + u_z\sigma_z[/tex] and then normalizes it by computing the square root of its length. Pity the student. And pity the instructor who has to account for the arbitrary complex phase when grading what the students turn in. The simplicity of the above calculation also shows the superiority of the pure density matrix formulation from a geometric point of view. All the elements on the RHS are defined geometrically, and can be treated as vectors. And if you want a spinor solution from the pure density matrix calculation, simply compute the above in your favorite representation, and take any non zero column from the pure density matrix result. For example: [tex] |u> = |u><u| |0><0|[/tex] where <0| is (1,0) or (0,1), whichever gives a non zero <0|u>. Much simpler. For an example with real numbers, see: http://en.wikipedia.org/wiki/Spinor#Example:_Spinors_of_the_Pauli_Spin_Matrices When one expands from spin to isospin the above simplicity follows along. If you want to follow Einstein's path of understanding physics from geometry, density matrices are the way to go. Carl
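The trivial eigenvector solution can be verified directly. A quick sketch with an arbitrary unit vector u (my choice for illustration), checking both the purity of (1 + u.sigma)/2 and the "take any nonzero column" recipe:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary unit vector u.
u = np.array([0.6, 0.0, 0.8])
S_u = u[0] * sx + u[1] * sy + u[2] * sz

# The pure density matrix for spin +1/2 along u, written down directly:
rho_u = (np.eye(2) + S_u) / 2

# It is a pure state: idempotent with trace 1.
print(np.allclose(rho_u @ rho_u, rho_u))       # True
print(np.isclose(np.trace(rho_u).real, 1.0))   # True

# And any nonzero column of it is (proportional to) the spinor eigenvector:
psi = rho_u[:, 0]
psi = psi / np.linalg.norm(psi)
print(np.allclose(S_u @ psi, psi))   # eigenvalue +1: spin +1/2 along u
```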
I am missing your point. Going back to your opening post, suppose we have two physical states |0> and |1>. Then (in the absence of superselection rules), it follows that a|0> + b|1> (normalized) is also a physical state. Let us call this new state |1'>. Is |1'> a state of superposition? Well ... relative to the basis {|0>, |1>}, yes it is. But relative to the basis {|⊥1'>, |1'>}, it is not. Now, let's say |0> and |1> happen to be eigenstates of energy. The following questions might then emerge: What makes my energy measuring apparatus "select" the {|0>, |1>} basis and not some other basis? Does there exist a measuring apparatus for which |1'> is one of the associated eigenstates? Am I getting any closer to your point? If you are wondering why I am 'hooked' on eigenstates, that is because you have expressed your query like this: Superposition is as "real" as the basis relative to which you express the initial vector. And a basis of eigenstates of a physical observable is as "real" as a basis can get. Am I getting any closer to your point?
That IS my point! Superposition is relative to a basis, i.e. only defined up to a basis. And at the state level it appears we can choose any basis we like. CarlB: You know I am interested in your density matrix formulation, and your demonstration of it in post #10 was fascinating, but I have some way to go to master this in detail. The idea of a geometric approach to spinors is very attractive.
When I woke up this morning I realized that some proofs I left as exercises for the reader might not be as obvious as I felt when I wrote them. So let me add the following calculation that will give the essence of how it is that the complex numbers appear naturally in the density matrix formulation (that is, without any need to bring in a trace). I will give this example in the Pauli matrices, but the same thing applies in any Clifford algebra, and particularly the Dirac gamma matrices. Let O be a primitive idempotent Pauli matrix, that is, a spin projection operator or a density matrix operator. Such matrices have a trace of 1 and square to themselves. In the right basis, one can always write O as: [tex]O = \left(\begin{array}{cc}1&0\\0&0\end{array}\right).[/tex] For example, if O happens to be the spin projection operator for spin-1/2 in the +z direction, then the usual Pauli matrices give the above as O. Now consider what happens when I have a product of a series of arbitrary 2x2 matrices that begins and ends with O. For example, OAO, where "A" is an arbitrary matrix (or a product or sum or whatever of arbitrary matrices): [tex]OAO = \left(\begin{array}{cc}1&0\\0&0\end{array}\right) \left(\begin{array}{cc}a&b\\c&d\end{array}\right) \left(\begin{array}{cc}1&0\\0&0\end{array}\right)[/tex] [tex]= \left(\begin{array}{cc}a&0\\0&0\end{array}\right) = a \left(\begin{array}{cc}1&0\\0&0\end{array}\right) = a\;O[/tex] Thus the arbitrary matrix A got converted by the Os to just a complex multiple of O. Consequently, any products of matrices of the sort OAO and OBO commute. For example, let OBO = bO, then compute the product: [tex]OAO = aO, OBO = bO[/tex] then [tex](OAO)(OBO) = (aO)(bO) = (ab)OO = ab\;O = (OBO)(OAO)[/tex] In other words, such products, that is, things like OAO, make up a copy of the complex numbers. 
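Here is the OAO = aO calculation done numerically (random matrices A and B of my own choosing, O the spin-up projector written above):

```python
import numpy as np

# O is a primitive idempotent (trace 1, O O = O): spin-up along z.
O = np.array([[1, 0], [0, 0]], dtype=complex)

# A and B: arbitrary 2x2 complex matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# Sandwiching between Os collapses A to a complex multiple of O...
OAO = O @ A @ O
print(np.allclose(OAO, A[0, 0] * O))        # OAO = a O, with a = A[0,0]

# ...so such sandwiched products commute, just like complex numbers.
OBO = O @ B @ O
print(np.allclose(OAO @ OBO, OBO @ OAO))    # True
```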
Therefore, if you make your spinors out to be |A> = AO, and <B| = OB, then products of the form <A|M|B> will be OAMBO, and will be a complex multiple of O, and therefore will act just like complex numbers. This is how you get spinors, OA, from pure density matrices, A, along with a "vacuum choice" O.

Now the method of adding pure density matrices requires that one show that XOX is a multiple of a pure density matrix if O is also a pure density matrix. One shows this as follows: [tex](XOX)(XOX) = X(OXXO)X = X(xO)X = x(XOX),[/tex] so [tex](XOX/x)(XOX/x) = (XOX)(XOX)/x^2 = x(XOX)/x^2 = (XOX/x)[/tex] and therefore [tex](XOX/x)[/tex] is an idempotent, as claimed. The fact that [tex]XOX/x[/tex] is a "primitive" idempotent is easy to see, for example, by computing the trace: [tex]\mathrm{tr}(XOX/x) = \mathrm{tr}(XOOX)/x = \mathrm{tr}(OXXO)/x = x/x = 1,[/tex] where use has been made of the fact that [tex]OO=O[/tex], along with the fact that any idempotent matrix with a trace of 1 is a primitive idempotent (an example of such a primitive idempotent is the matrix O = diag(1, 0) used in the first half; note tr(O) = 1).

The math is a little perpendicular to what one is used to with spinors. You can see that I've used the trace function only to prove the mathematical facts about products of matrices. For the actual computation with these types of "spinors", there is no trace needed. What I'm providing here is a method of calculating with pure density matrices that requires no use of the trace function, but instead acts just like the usual complex numbers. In short, this is a very simple and yet fully geometric basis for the foundations of quantum mechanics. The paper I'm working on that will explain this is going to be about 20 pages long. It begins with very simple stuff like I've shown here, and then extends the theory to internal symmetries like isospin.
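And the idempotency of XOX/x can be checked the same way. In this sketch I take X Hermitian, which is my own simplifying assumption (it makes x = tr(XOX) real and positive); the general argument in the post does not need it.

```python
import numpy as np

O = np.array([[1, 0], [0, 0]], dtype=complex)   # primitive idempotent

# X: an arbitrary Hermitian matrix (my simplifying assumption).
rng = np.random.default_rng(1)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
X = M + M.conj().T

XOX = X @ O @ X
x = np.trace(XOX)          # here OXXO = x O, and tr(XOX) = x as well

# XOX/x is idempotent with trace 1, hence a primitive idempotent.
P = XOX / x
print(np.allclose(P @ P, P))                    # True
print(np.isclose(np.trace(P).real, 1.0))        # True
```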
It ends with a derivation of the masses of the leptons à la http://www.brannenworks.com/MASSES2.pdf , which is intended to show that not only does pure density matrix theory provide an alternative and very elegant geometric foundation for quantum mechanics, but that it also gives insight into the substructure of the elementary particles and allows computations that are impossible in the spinor theory. I hate it when my papers are ignored because they cannot be understood. I try to write so that anyone with a BS in physics can understand, but I don't always get there. Your comment that what I wrote above went by you was very useful, because it tells me that I have to dig a little deeper. Carl
Eh, yes, coordinates are relative to a basis, if that's what you want to say. The statement "having more than one coordinate different from 0" is a coordinate-system dependent notion, of course. But that's not the point. The point is: pick a basis, pick no matter what basis. Then all possible coordinates can occur. *This* statement is not coordinate-system dependent, and that's the superposition principle. Like in the Euclidean plane. Saying that "a point has both a non-zero x and y coordinate" is a coordinate-system dependent notion, because it is sufficient to draw a new X axis through said point. However, saying that "in any coordinate system, there are points with all possible x and y coordinates" is a coordinate-system independent notion (and the definition of a plane, or at least a 2-dim manifold). It's not the specific values of the coordinates that count, it is the fact that all of them can occur. The fact that we "fill the plane". It's a mind-boggling notion in quantum theory, and the origin of all its strangeness. Not the fact that in a specific basis, a specific state has two or more non-zero coefficients, but rather that in ANY basis, there ARE states with all possible sets of coefficients.
You need to explain to me how this "all possible coordinate" idea works, since it seems to violate the principles of linear spaces. Is the state a ray in a projective linear space or isn't it? That would be mind-boggling all right but you haven't sold me that it is a coherent description of what happens. In particular if this is a geometric fact then why is there a basis problem in explaining projection?
Maybe I'm missing something, but I thought this was an expected thing in QM. Taking a particle localized at the point [tex]x_1[/tex], the state is [tex]| x_1 \rangle[/tex]. However, in the momentum eigenket basis it's: [tex]| x_1 \rangle = C_1 | p_1 \rangle + C_2 | p_2 \rangle + \cdots[/tex] In other words, a particle can be in a definite state of one quantity, but in superposition with respect to another quantity. From the example above, a particle of definite position is in a superposed state with respect to momentum. It follows from their non-commutativity that this is so. Superposition is relative to the quantity. Maybe I'm mad, but isn't this mentioned in most QM texts?
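A discrete toy version of this is easy to write down (my own finite-dimensional stand-in for the continuum statement): a position eigenket, expanded in the discrete momentum (Fourier) basis, has equal-magnitude coefficients on every momentum ket.

```python
import numpy as np

# Discrete toy model on N sites: momentum eigenkets are the Fourier
# modes |p_k> with components exp(2*pi*i*k*n/N)/sqrt(N) (columns below).
N = 8
n = np.arange(N)
momentum_kets = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# A position eigenket |x_1>: perfectly definite position...
x1 = np.zeros(N, dtype=complex)
x1[3] = 1.0

# ...is a superposition of ALL momentum kets with equal weight 1/N:
coeffs = momentum_kets.conj().T @ x1       # C_k = <p_k|x_1>
print(np.allclose(np.abs(coeffs) ** 2, np.full(N, 1 / N)))   # True
```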
Dick, knowing you a bit, I know also that I don't have to tell you what a linear space is, so I'm totally confused about what you are saying. In a linear space, take any coordinate system, where a point has an n-tuple of real or complex numbers as coordinates. Well, all n-tuples are possible coordinates. You cannot say that the point with coordinates (1,0,0) exists, and the point with coordinates (0,1,0) exists, but that the point with coordinates (2.5, 3.1, 0) doesn't exist, because in a linear space, it does.

Well, yes, the correct state is a POINT in the projective space, or a ray in the corresponding linear space (the projective space being the set of rays of the linear space - maybe we use different terminology?).

Well, in order for projection to occur, one has to say WHAT PROJECTORS to use. Now, in standard QM, these are *introduced ad hoc*. They are associated to the spectral decomposition of "the Hermitian operator corresponding to the measurement", but "the Hermitian operator" is nothing else but the set of projectors of its spectral decomposition. So these projectors are introduced by hand in standard QM. To each "measurement" corresponds a "set of projectors" introduced by hand. And if you don't do this ad hoc introduction for a "measurement", then one needs to find a dynamical scheme that could introduce a "preferred set of projectors". As I understand it, the programme of environmental decoherence is exactly that: finding a dynamical justification for the appearance of a "preferred set of projectors".

The "basis problem" you mention is only a problem in standard QM, when one realises that the set of projectors introduced by the "measurement operator" is entirely arbitrary and plugged in by hand: indeed, why not take another set then? And the results are different. This is the "preferred basis problem" (at least, I understand it that way). There is, in standard QM, no *dynamical* justification for the choice of one set of projectors over another.
And of course "projection" is not invariant under a change of a set of projectors. (but, that said, even the projection itself is not dynamically justified, so...)
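The point that a Hermitian "measurement operator" is nothing but its set of spectral projectors can be made explicit. A small sketch using sigma_x as the operator (any Hermitian matrix would do):

```python
import numpy as np

# A Hermitian "measurement operator": here sigma_x.
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Spectral decomposition: eigenvalues and rank-1 projectors.
evals, evecs = np.linalg.eigh(sx)
projectors = [np.outer(evecs[:, i], evecs[:, i].conj()) for i in range(2)]

# The operator is recovered as sum of eigenvalue * projector:
rebuilt = sum(l * P for l, P in zip(evals, projectors))
print(np.allclose(rebuilt, sx))                          # True

# The projectors are complete and mutually orthogonal:
print(np.allclose(sum(projectors), np.eye(2)))           # completeness
print(np.allclose(projectors[0] @ projectors[1], 0))     # orthogonality
```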
I hope I haven't mistaken the point of the thread. Let's say we have a ket of the form: [tex]| \psi \rangle = C_1 | A \rangle + C_2 | B \rangle[/tex] If [tex]| A \rangle[/tex] and [tex]| B \rangle[/tex] are eigenkets of a quantity [tex]Z[/tex], then the system represented by [tex]| \psi \rangle[/tex] doesn't possess a definite value for [tex]Z[/tex]. However, let's say that [tex]| \psi \rangle[/tex] is an eigenket of another quantity [tex]Y[/tex]. In the [tex]Y[/tex] basis, [tex]| \psi \rangle[/tex] isn't in superposition, as it is one of the basis vectors of the [tex]Y[/tex] basis. So the system has a definite value of [tex]Y[/tex]. Whether something is in a superposition or not depends on the quantity (which defines a basis): just because a system has a definite value of one quantity doesn't mean it has a definite value for another quantity.
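A concrete instance of this (the standard spin-1/2 example, my choice for illustration): (|0> + |1>)/sqrt(2) is a superposition in the sigma_z eigenbasis, yet it is a single eigenket of sigma_x, so it has a definite value of that quantity.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)

# |psi> = (|0> + |1>)/sqrt(2): a superposition in the sigma_z eigenbasis...
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

# ...yet a single eigenket of sigma_x, with definite value +1:
print(np.allclose(sx @ psi, psi))   # True

# In the sigma_z basis it has two nonzero coefficients, so no
# definite sigma_z value: probabilities 0.5 and 0.5.
print(np.abs(psi) ** 2)
```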
When we talk about eigenkets, we are referring to an interaction. The eigenspectrum actually belongs to the operator, not to the ket it acts on. But is it ever meaningful to speak of superposition in contexts other than interaction?
I knew this wasn't as simple as I thought. Do you mean is it meaningful to speak of superposition without measurement? i.e. Is the interaction you are talking about the Interaction between the Quantum Mechanical particle and the Experimental Apparatus which the Operator represents?