Superposition and Kets 
#1
Jun2906, 08:27 AM

Emeritus
PF Gold
P: 8,147

Everybody says that if you have two states |0> and |1> then their superposition is a linear combination a|0> + b|1>. But you also have freedom to choose a basis, and you can choose one in which this combination is a basis vector, so in this new basis it becomes just c|1'>, apparently losing the indication of superposition. This would suggest (I am sure wrongly!) that superposition is just a coordinate effect, i.e. "not real".
I am sure there is rational discussion of this point somewhere, but I haven't seen it. 
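The basis change in question is easy to exhibit numerically. Below is a minimal sketch (my own toy example, not from the thread): rotating to a basis whose first vector is the superposition itself makes the coordinates read (1, 0), so the "superposition" disappears from the coordinates.

```python
import numpy as np

a, b = 0.6, 0.8                      # real amplitudes, |a|^2 + |b|^2 = 1
psi = np.array([a, b])               # a|0> + b|1> in the computational basis

# Choose a new orthonormal basis whose first vector IS psi.
e1p = psi
e2p = np.array([-b, a])              # orthogonal complement of psi
U = np.vstack([e1p, e2p])            # change-of-basis matrix (rows = new basis)

psi_new = U @ psi                    # coordinates of psi in the new basis
print(psi_new)                       # -> [1. 0.]: no visible "superposition"
```

The vector itself is unchanged; only its coordinate description loses the two nonzero components.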


#2
Jun2906, 08:38 AM

Mentor
P: 6,232

Penrose, in section 30.12 of his Road to Reality, writes about some problems he has had with the situation that you have brought up. 


#3
Jun2906, 09:45 AM

Sci Advisor
HW Helper
P: 1,204

There are a number of reasons for believing this. First, QM is not a truly linear theory. An example of a linear theory is E&M. If you solve Maxwell's equations you get a bunch of E&M fields. Tripling those fields gives you a solution that physically corresponds to a situation distinct from the one you started with, in that the fields are three times stronger. But tripling the wave function of a solution to Schroedinger's equation does not produce a new wave function that corresponds to a particle that is three times more intense; it just gives you a wave function for the same thing you started out with. Second, there are alternative formulations of QM that do not possess linear superposition. For example, any state vector (wave function) can be converted to a density matrix form that still allows the calculation of any physical result, but one cannot take the linear superposition of two density matrices and get a new density matrix. The reason is that instead of being linear, the density matrix forms are bilinear. At the heart of the density matrix formalism is the idempotency equation: [tex]\rho_u\;\rho_u = \rho_u .[/tex] This equation is the simplest nonlinear equation you can write down, and the fact that it is at the heart of this formulation of QM suggests to me that the essence of QM is nonlinearity, not linearity. Now when you write the idempotency relation in the state vector formalism it becomes a normalization condition: [tex]\rho_u = |u\rangle\langle u|, [/tex] [tex]\langle u | u\rangle = 1 \to \rho_u\;\rho_u = \rho_u,[/tex] but if you look at it the other way around, assuming that it is the density matrices that are fundamental, then the central equation is a nonlinear one. Linear tools are much easier to build, so naturally we usually use QM in its linear form. But forcing linearity onto a nonlinear world can make for some confusion. 
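The idempotency claim is easy to check numerically. A minimal sketch (my own choice of states): a pure-state density matrix squares to itself, while a sum of two distinct pure density matrices does not.

```python
import numpy as np

u = np.array([1, 1j]) / np.sqrt(2)           # normalized state vector
rho_u = np.outer(u, u.conj())                # |u><u|
print(np.allclose(rho_u @ rho_u, rho_u))     # -> True: idempotent, pure state

v = np.array([1, 0], dtype=complex)
rho_v = np.outer(v, v.conj())
mix = 0.5 * rho_u + 0.5 * rho_v              # adding two density matrices
print(np.allclose(mix @ mix, mix))           # -> False: no longer idempotent
```

This is the bilinearity point above: adding density matrices does not stay inside the set of pure (idempotent) density matrices.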
For example, when one rotates a spinor by using spinors, one finds that spinors take a multiplier of -1 when rotated through an angle of 2 pi. Since we use spinors to represent electrons, the popular belief is that rotating an electron by 2 pi causes it to become something different from what it was before. But when one works in the density matrix formulation, there is no multiplication by -1. In fact, the usual complex phase ambiguity of state vectors is completely absent from the density matrix formulation. Anyway, this is one of the reasons I am trying to popularize the density matrix formulation of QM. I started a website on this, but it still has a long way to go. It does include a "wiki" where people are invited to make contributions, comments, etc.: http://www.DensityMatrix.com Carl 


#4
Jun2906, 10:13 AM

Emeritus
Sci Advisor
PF Gold
P: 6,236

There are of course no states with an objective label "superposition" and other states with a label "no superposition". Take the Euclidean plane. The statement of superposition is that given two different lines through the origin, we can make new lines by superposing their unit vectors (and hence fill in the entire plane). But there are of course no special lines which are "superposed lines" and others which are "not in superposition". The point is simply that, given two lines, you fill in the entire plane. And given two states, you fill in an entire projective complex plane. This is the whole idea of the superposition principle. Now, it might be (and this happens) that superpositions of known states generate other states which are also known. For instance, you could say that states of a particle with given positions are "known states", which we label by |q1>, |q2> ... |qn> ... where q1, q2 ... qn are postulated to be possible positions. The thing the superposition principle tells you is that you cannot just make that list and stop, because all the linear combinations are ALSO states. Maybe you know them, maybe not. So it could be, say, that the superposition of 3|q1> and 2i|q2> just yields another |q5>. How exactly this web is supposed to hang together is entirely open to specification. The above doesn't happen, in fact: no superposition of two "position states" gives you a new "position state" (although the superposition principle doesn't forbid it). BUT, something of the kind DOES happen: a superposition of different position states, Int(dq Exp(i p q) |q>), produces another state, |p>, which is this time a momentum state. So certain superpositions of postulated states give you other existing states you knew already. However, MANY, MANY more states are usually postulated to be produced. Again, this is entirely up to the specific quantum model you introduce. You're only supposed to say what the INDEPENDENT states are. 
But you could think of a "quantum theory" where there are only 4 independent position states, namely |1x>, |1y>, |1z> and |1r>. You could then *postulate* that a|1x> + b|1y> + c|1z> + d|1r> gives you another position state in space corresponding to x = a/r, y = b/r, z = c/r for x, y, z real; I'm just inventing something. It is entirely up to you to determine what the independent states are, and what the relationships are between known states and superpositions of other known states. Usually, we start by saying that all the points corresponding to a classical configuration space correspond to independent states, and that their Fourier combinations yield states that correspond to the configuration space of conjugate momenta, but we're absolutely not obliged to do so. However, the superposition principle simply forbids you NOT to consider the superposition of two known states. It must be something: either another known state, or a new state. And if we say that there corresponds an independent state to each point of configuration space, then we create A LOT of new states, of which the momentum states are only a tiny, tiny fraction. But the most striking consequence of the superposition principle is that (even approximately) DISCRETE states cannot exist. Discrete states, such as a yes/no outcome of an observation: you now have to say what the superposition states of "yes" and "no" are. And this is the origin of the entire measurement problem: the fact that you cannot split the entire state space into a part where we have "yes" and another part where we have "no" (which is no problem in classical configuration space). So when we talk about "superposed states", although the concept has, by itself, no absolute meaning, we usually mean a state which is not a "known state" such as a position state, or a momentum state, or a "yes" state, or a "no" state. 
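The Fourier example above has a clean finite-dimensional analogue. In this sketch (the dimension and normalization are my choices), each discrete "momentum" state is an explicit superposition of the "position" basis states |q_k>, and the momentum states so produced form an orthonormal basis of their own.

```python
import numpy as np

N = 8
qs = np.eye(N)                               # |q_0> ... |q_{N-1}> as columns

def momentum_state(p):
    # |p> = (1/sqrt(N)) * sum_k exp(2*pi*i*p*k/N) |q_k>
    coeffs = np.exp(2j * np.pi * p * np.arange(N) / N) / np.sqrt(N)
    return qs @ coeffs

p_states = np.column_stack([momentum_state(p) for p in range(N)])
# Every momentum state is a superposition of ALL position states, yet the
# N momentum states are again mutually orthonormal:
print(np.allclose(p_states.conj().T @ p_states, np.eye(N)))   # -> True
```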


#5
Jun2906, 10:28 AM

Sci Advisor
HW Helper
P: 2,887

Lots of deleted stuff
You know, I have always thought that the transition from CM to QM could be very succinctly summarized as "replace OR by AND"!! In CM, we have "the particle is in this state OR in that state", whereas in QM, we have, in general, "the particle is in this state AND in that state", etc. Of course, "replacing OR by AND" does not give all one needs to get QM (once one has AND, one must specify what this implies for measurements and so on). But I thought that it encapsulates nicely one key difference between CM and QM. So here's my motto for the day: "In going from CM to QM, replace OR by AND". Pat 


#6
Jun2906, 10:28 AM

Emeritus
Sci Advisor
PF Gold
P: 6,236

You could turn it upside down: the fact that we can do all the calculations you need to do with density matrices, and with their appearance in nonlinear combinations, in a Hilbert space formalism, means that the underlying theory is linear! Nevertheless, there are some issues with the purely projective jargon also. How do you distinguish between the state |a> + |b> and the state |a> - |b>, when starting from |a><a| and |b><b|? Clearly |a> + |b> and |a> - |b> are physically distinct states. So to make the projectors of |a> + |b> and |a> - |b> you have not much choice but to go through the Hilbert vector formalism, and simply write: (|a> + |b>)(<a| + <b|) = |a><a| + |b><b| + |a><b| + |b><a|. But to go directly from "particle goes through slit a and b at once, in phase" to the above expression is far from clear, although the |a> + |b> is really evident. So the density matrix form is not always the clearest way to express things. Sometimes it is, sometimes it isn't. 
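The |a> + |b> versus |a> - |b> distinction can be made concrete. A small check (with my own choice of |a> and |b> as the two slit states): the two projectors differ, and expanding (|a> + |b>)(<a| + <b|) reproduces exactly the cross terms written above, which the pair |a><a|, |b><b| alone does not carry.

```python
import numpy as np

a = np.array([1, 0], dtype=complex)          # "slit a" state
b = np.array([0, 1], dtype=complex)          # "slit b" state

plus = (a + b) / np.sqrt(2)
minus = (a - b) / np.sqrt(2)
P_plus = np.outer(plus, plus.conj())
P_minus = np.outer(minus, minus.conj())

print(np.allclose(P_plus, P_minus))          # -> False: distinct states
# Expanding P_plus reproduces the expression in the post (normalized):
expansion = (np.outer(a, a.conj()) + np.outer(b, b.conj())
             + np.outer(a, b.conj()) + np.outer(b, a.conj())) / 2
print(np.allclose(P_plus, expansion))        # -> True
```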


#7
Jun2906, 11:10 AM

P: 29

[QUOTE=selfAdjoint]Everybody says that if you have two states |0> and |1> then their superposition is a linear combination a|0> + b|1>. But you also have freedom to choose a basis, and you can choose one in which this combination is a basis vector, so in this new basis it becomes just c|1'>, apparently losing the indication of superposition. This would suggest (I am sure wrongly!) that superposition is just a coordinate effect, i.e. "not real".[/QUOTE]
What then happens if we go to an infinite dimensional Hilbert space? For example, consider the harmonic oscillator: there it is possible to express a number state $\vert n \rangle$ as a superposition of momentum eigenstates $\vert p \rangle$ and vice versa; however, only the first superposition is well defined, since the latter gives a vector which is not square integrable and so cannot represent a state. 


#8
Jun2906, 12:52 PM

P: 252




#9
Jun2906, 05:44 PM

Emeritus
PF Gold
P: 8,147

Well, I am somewhat clearer, but I don't think eigenstates are the only case where superposition becomes significant. As I posted, everybody (i.e. textbook writers) wants to sell you superposition of states, or kets in the Dirac algebra.
Now I fully agree that a vector in a vector space, even an infinite dimensional one, has a uniqueness independent of choice of basis. That is not my issue. My issue concerns linear combinations of vectors. These are not unique independently of basis. In order to get a "thing" in, for example, plane geometry, you have to take a pair of vectors as a thing and consider its multiples, which gives you a tessellation of the plane that is independent of basis. But nobody seems to define superposition this way, and you don't get that for free in higher dimensions; there we are talking about Clifford algebras, and that is a nontrivial extension. And in the fraught subject of Bell tests, it seems to me that superposition is key, and there is this basis problem, and I wonder if these things are connected. Something like: the state formalism is static, and we assume unitary evolution in order to preserve state values. Then when we consider two particles that share a state, and a superposed one at that, and we also have enough control over locality to separate those two particles to a spacelike relationship without breaking or projecting the state, we have problems, because we can't just appeal to eigenvalues, since the state is for the moment uncollapsed. But the state of superposition is very delicate, in that it can be destroyed with respect to one particle without (?) being destroyed at the other; or one observer sees the state at the other as a superposition of the two possible actions the other could perform. THEN (got to the end eventually) the rather unsolid conceptual status of "superposition" could be a factor in describing what goes on. Or am I missing something? 


#10
Jun2906, 06:34 PM

Sci Advisor
HW Helper
P: 1,204

The density matrix equivalent of the addition of two spinors should be the combination of two pure density matrices so as to obtain a new density matrix that is also pure, and this is possible to do in a natural way, as I now show. To see how to do this, it is useful to know how one obtains things that act like spinors inside the pure density matrix formalism. Choose an arbitrary state |0><0|, which will be treated sort of like a vacuum state. The only restriction is that neither <0|a> nor <0|b> can be zero. Define: |a> = |a><a| |0><0|, <a| = |0><0| |a><a|, and similarly for |b>. I leave it as an exercise for the reader to show that the above do act like spinors in that one can compute matrix elements, etc. This relies on the fact that products of the form |0><0| ... |0><0| can always be reduced to a complex multiple of |0><0|, which is how complex numbers end up as matrix elements in the density matrix formalism without the need to bring them in by defining a complex valued trace function. Mathematically, this has to do with a property of primitive idempotents and ideals in an algebra. Now add two states as follows: |a> + |b> = |a+b> = (|a><a| + |b><b|) |0><0|, and similarly for <a+b|. Note that the RHS is defined only in terms of pure density matrices. To get the pure density matrix |a+b><a+b|, simply multiply the bra and ket forms together: |a+b><a+b| = (|a><a| + |b><b|) |0><0| |0><0| (|a><a| + |b><b|) = (|a><a| + |b><b|) |0><0| (|a><a| + |b><b|), which again is defined only with density matrices. Note that the presence of the |0><0| is required for this; you cannot simply add pure density matrices to get another pure density matrix. In the above I assumed that |0><0| was normalized in getting to the second line. As when one adds spinors, the sum is no longer normalized. I leave it as an exercise for the reader to verify that it is, in fact, possible to normalize the above by multiplying by a constant to turn it into a pure density matrix. 
(More generally, let M and N be arbitrary matrices, and let O be a primitive idempotent. Prove that (MON)(MON) = k(MON) where k is a complex number.) Now the above method of adding pure density matrices appears complicated, but it should be remembered that adding states is something that happens only in the mathematics; the real world doesn't need to add states together. Also note that the above method of adding pure density matrices gives a result that depends on the choice of |0><0|. A little consideration will show that the same thing happens with spinors. For example, do a spinor sum calculation for two randomly chosen spinors in two different choices of representation (for example, once with the usual S_z basis, and once with some other basis such as S_x) and you will find that the results differ. This all gets back to the arbitrary phases that spinors carry around. I really can't give this much algebra in a post without referencing my Quixotic attempt to reformulate quantum mechanics in pure density matrix form at the website http://www.DensityMatrix.com It's my belief that to truly understand the spinor formalism you must understand the pure density matrix formalism, and when one does this, the structure of the elementary particles will become a lot easier to compute. Basically, the spinor formalism makes it easy to compute interference as a function of spatial position, as in the two-slit experiment. The reason it is so easy to compute with spinors is that spinors can be added together. But elementary particles need to be understood geometrically, something that is simpler in the pure density matrix formalism than in the spinor formalism. For example, the solution to the eigenvector problem for the state corresponding to spin +1/2 in the unit vector [tex](u_x,u_y,u_z)[/tex] direction is trivial in pure density matrix form: [tex]|u\rangle\langle u| = (1 + u_x\sigma_x + u_y\sigma_y + u_z\sigma_z)/2[/tex] where the factor of 2 normalizes the state. 
With pure density matrices, the normalization is unique; with spinors it is not. Compare the simplicity of the above to the spinor calculation, where one finds an eigenvector of the operator [tex] u_x\sigma_x + u_y\sigma_y + u_z\sigma_z[/tex] and then normalizes it by computing the square root of its length. Pity the student. And pity the instructor who has to account for the arbitrary complex phase when grading what the students turn in. The simplicity of the above calculation also shows the superiority of the pure density matrix formulation from a geometric point of view. All the elements on the RHS are defined geometrically, and can be treated as vectors. And if you want a spinor solution from the pure density matrix calculation, simply compute the above in your favorite representation, and take any nonzero column from the pure density matrix result. For example: [tex] |u\rangle = |u\rangle\langle u| \; |0\rangle\langle 0|[/tex] where <0| is (1,0) or (0,1), whichever gives a nonzero <0|u>. Much simpler. For an example with real numbers, see: http://en.wikipedia.org/wiki/Spinor#..._Spin_Matrices When one expands from spin to isospin, the above simplicity follows along. If you want to follow Einstein's path of understanding physics from geometry, density matrices are the way to go. Carl 
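The addition scheme above can be verified numerically. Below is a sketch with my own choice of states and "vacuum" (assuming, as in the post, that <0|a> and <0|b> are nonzero): form (|a><a| + |b><b|) |0><0| (|a><a| + |b><b|) and normalize, and the result is again a pure (idempotent, trace-1) density matrix.

```python
import numpy as np

def proj(v):
    # normalized projector |v><v| onto the state v
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

a = np.array([1, 1j], dtype=complex)
b = np.array([2, 1], dtype=complex)
vac = np.array([1, 1], dtype=complex)        # <0|a>, <0|b> both nonzero

S = proj(a) + proj(b)                        # |a><a| + |b><b|
rho = S @ proj(vac) @ S                      # bra and ket forms multiplied
rho = rho / np.trace(rho)                    # normalize by a constant

print(np.allclose(rho @ rho, rho))           # -> True: a pure density matrix
```

As the post notes, the result depends on the choice of the vacuum state; changing `vac` changes which pure state the sum lands on.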


#11
Jun2906, 08:37 PM

P: 252

Going back to your opening post, suppose we have two physical states |0> and |1>. Then (in the absence of superselection rules), it follows that a|0> + b|1> (normalized) is also a physical state. Let us call this new state |1'>. Is |1'> a state of superposition? Well ... relative to the basis {|0>, |1>}, yes it is. But relative to the basis {|1'⊥>, |1'>}, it is not. Now, let's say |0> and |1> happen to be eigenstates of energy. The following questions might then emerge: What makes my energy measuring apparatus "select" the {|0>, |1>} basis and not some other basis? Does there exist a measuring apparatus for which |1'> is one of the associated eigenstates? If you are wondering why I am 'hooked' on eigenstates, that is because of the way you have expressed your query. Am I getting any closer to your point? 


#12
Jun3006, 07:51 AM

Emeritus
PF Gold
P: 8,147

CarlB: You know I am interested in your density matrix formulation, and your demonstration of it in post #10 was fascinating, but I have some way to go before mastering this in detail. The idea of a geometric approach to spinors is very attractive. 


#13
Jun3006, 10:45 AM

Sci Advisor
HW Helper
P: 1,204

I will give this example in the Pauli matrices, but the same thing applies in any Clifford algebra, and in particular the Dirac gamma matrices. Let O be a primitive idempotent Pauli matrix, that is, a spin projection operator or a density matrix operator. Such matrices have a trace of 1 and square to themselves. In the right basis, one can always write O as: [tex]O = \left(\begin{array}{cc}1&0\\0&0\end{array}\right).[/tex] For example, if O happens to be the spin projection operator for spin-1/2 in the +z direction, then the usual Pauli matrices give the above as O. Now consider what happens when I have a product of a series of arbitrary 2x2 matrices that begins and ends with O. For example, OAO, where "A" is an arbitrary matrix (or a product or sum or whatever of arbitrary matrices): [tex]OAO = \left(\begin{array}{cc}1&0\\0&0\end{array}\right) \left(\begin{array}{cc}a&b\\c&d\end{array}\right) \left(\begin{array}{cc}1&0\\0&0\end{array}\right)[/tex] [tex]= \left(\begin{array}{cc}a&0\\0&0\end{array}\right) = a \left(\begin{array}{cc}1&0\\0&0\end{array}\right) = a\;O[/tex] Thus the arbitrary matrix A got converted by the O's to just a complex multiple of O. Consequently, any products of matrices of the sort OAO and OBO commute. For example, let [tex]OAO = aO, \quad OBO = bO,[/tex] then compute the product: [tex](OAO)(OBO) = (aO)(bO) = (ab)OO = ab\;O = (OBO)(OAO).[/tex] In other words, such products, that is, things like OAO, make up a copy of the complex numbers. Therefore, if you make your ket spinors out to be |A> = AO, and your bra spinors <B| = OB, then products of the form <B|M|A> will be OBMAO, and will be a complex multiple of O, and therefore will act just like complex numbers. This is how you get spinors from pure density matrices, together with a "vacuum choice" O. Now the method of adding pure density matrices requires that one show that XOX is a multiple of a pure density matrix if O is a pure density matrix. 
One shows this as follows: [tex](XOX)(XOX) = X(OXXO)X = X(xO)X = x(XOX),[/tex] so [tex](XOX/x)(XOX/x) = (XOX)(XOX)/x^2 = x(XOX)/x^2 = (XOX/x),[/tex] and therefore [tex]XOX/x[/tex] is an idempotent, as claimed. The fact that [tex]XOX/x[/tex] is a "primitive" idempotent is easy to see, for example, by computing the trace: [tex]\mathrm{tr}(XOX/x) = \mathrm{tr}(XOOX)/x = \mathrm{tr}(OXXO)/x = x/x = 1,[/tex] where use has been made of the fact that [tex]OO=O[/tex], along with the fact that any idempotent matrix with a trace of 1 is a primitive idempotent (an example of such a primitive idempotent is the matrix ((1,0),(0,0)) used as O in the first half; note tr(O) = 1). The math is a little perpendicular to what one is used to with spinors. You can see that I've used the trace function only to prove the mathematical facts about products of matrices. For the actual computation with these types of "spinors", no trace is needed. What I'm providing here is a method of calculating with pure density matrices that requires no use of the trace function, but instead acts just like the usual complex numbers. In short, this is a very simple and yet fully geometric basis for the foundations of quantum mechanics. The paper I'm working on that will explain this is going to be about 20 pages long. It begins with very simple stuff like I've shown here, and then extends the theory to internal symmetries like isospin. It ends with a derivation of the masses of the leptons à la http://www.brannenworks.com/MASSES2.pdf , which is intended to show that not only does pure density matrix theory provide an alternative and very elegant geometric foundation for quantum mechanics, but that it also gives insight into the substructure of the elementary particles and allows computations that are impossible in the spinor theory. I hate it when my papers are ignored because they cannot be understood. I try to write so that anyone with a BS in Physics can understand, but I don't always get there. 
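A quick numerical check of the lemma above (the random matrices and the 2x2 choice of O are my test choices, not from the post): for a primitive idempotent O and arbitrary M, N, the product MON squares to a complex multiple of itself.

```python
import numpy as np

rng = np.random.default_rng(0)

O = np.array([[1, 0], [0, 0]], dtype=complex)     # primitive idempotent, tr = 1
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
N = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

X = M @ O @ N
# O N M O reduces to a scalar times O; that scalar is the k in (MON)^2 = k(MON)
k = np.trace(O @ N @ M @ O)
print(np.allclose(X @ X, k * X))   # -> True
```

The same reduction (O sandwiching any matrix gives a multiple of O) is what makes the OAO products behave like complex numbers.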
Your comment that what I wrote above went by you was very useful because it tells me that I have to dig a little deeper. Carl 


#14
Jul106, 12:54 AM

Emeritus
Sci Advisor
PF Gold
P: 6,236

Like in the Euclidean plane. Saying that "a point has both a nonzero x and y coordinate" is a coordinate-system-dependent notion, because it is sufficient to draw a new X axis through said point. However, saying that "in any coordinate system, there are points with all possible x and y coordinates" is a coordinate-system-independent notion (and the definition of a plane, or at least a 2-dimensional manifold). It's not the specific values of the coordinates that count; it is the fact that all of them can occur. The fact that we "fill the plane". That's a mind-boggling notion in quantum theory, and the origin of all its strangeness: not the fact that in a specific basis, a specific state has two or more nonzero coefficients, but rather that in ANY basis, there ARE states with all possible sets of coefficients. 


#15
Jul106, 06:58 AM

Emeritus
PF Gold
P: 8,147




#16
Jul106, 07:23 AM

P: 89

Take a particle localised at the point [tex]x_1[/tex]; the state is [tex]\left| x_1 \right\rangle[/tex]. However, in the momentum eigenket basis it's: [tex]\left| x_1 \right\rangle = C_1 \left| p_1 \right\rangle + C_2 \left| p_2 \right\rangle + \ldots[/tex] In other words, a particle can be in a definite state of one quantity, but in superposition with respect to another quantity. From the example above, a particle of definite position is in a superposition state with respect to momentum. It follows from their non-commutativity that this is so. Superposition is relative to the quantity. Maybe I'm mad, but isn't this mentioned in most QM texts? 


#17
Jul106, 07:52 AM

Emeritus
Sci Advisor
PF Gold
P: 6,236

In a linear space, take any coordinate system, where a point has an n-tuple of real or complex numbers as coordinates. Well, all n-tuples are possible coordinates. You cannot say that the point with coordinates (1,0,0) exists, and the point with coordinates (0,1,0) exists, but that the point with coordinates (2.5, 3.1, 0) doesn't exist, because in a linear space, it does. The "basis problem" you mention is only a problem in standard QM, when one realises that the set of projectors introduced by the "measurement operator" is entirely arbitrary and plugged in by hand: indeed, why not take another set? And the results are different. This is the "preferred basis problem" (at least, I understand it that way). There is, in standard QM, no *dynamical* justification for the choice of one set of projectors over another, and of course "projection" is not invariant under a change of the set of projectors. And if you don't want this ad hoc introduction for a "measurement", then one needs to find a dynamical scheme that could introduce a "preferred set of projectors". As I understand it, the programme of environmental decoherence is exactly that: finding a dynamical justification for the appearance of a "preferred set of projectors". (But, that said, even the projection itself is not dynamically justified, so...) 


#18
Jul106, 08:08 AM

P: 89

Let's say we have a ket of the form: [tex]\left| \psi \right\rangle = C_1 \left| A \right\rangle + C_2 \left| B \right\rangle[/tex] If [tex]\left| A \right\rangle[/tex] and [tex]\left| B \right\rangle[/tex] are eigenkets of a quantity [tex]Z[/tex], then the system represented by [tex]\left| \psi \right\rangle[/tex] doesn't possess a definite value for [tex]Z[/tex]. However, let's say that [tex]\left| \psi \right\rangle[/tex] is an eigenket of another quantity [tex]Y[/tex]. In the [tex]Y[/tex] basis, [tex]\left| \psi \right\rangle[/tex] isn't in superposition, as it is one of the basis vectors of the [tex]Y[/tex] basis. So the system has a definite value of [tex]Y[/tex]. Whether something is in a superposition or not depends on the quantity (which defines a basis), since a system's having a definite value of one quantity doesn't mean it has a definite value for another. 
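A concrete two-level instance of this (my example, taking Y = sigma_x and Z = sigma_z): the state (|0> + |1>)/sqrt(2) is an eigenket of sigma_x, so it has a definite value of that quantity, yet it is an equal-weight superposition in the sigma_z eigenbasis.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)

# Definite value of the "Y" quantity: psi is an eigenket of sigma_x.
print(np.allclose(sigma_x @ psi, psi))               # -> True, eigenvalue +1

# Superposed with respect to the "Z" quantity: equal weights in the z basis.
print(np.abs(psi) ** 2)                              # -> [0.5 0.5]
```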

