
Meaning of operators for observables.

  1. May 5, 2004 #1
    I understand that observables in quantum mechanics are represented by hermitian operators, which are converted into a matrix when expressed in a particular basis. I also understand that when the basis used is an eigenbasis of the operator, the matrix becomes diagonal, having the eigenvalues as diagonal elements, and those eigenvalues are the possible results of the measurement.
    But I still think I am missing something, that I fail to visualize the connection between the math and the physical process.
    I know of three things you can do to a state:
    (1) You can unitarily change it, like under a force or just letting time pass
    (2) You can make a measurement, which would change the state by eliminating some of its components.
    (3) You can make a change of basis, which would leave the state alone and just change your description of it by rotating the frame of reference (in Hilbert space).
    But I don't understand how the observable operators relate to the three points above.
    I can see how the operator can, in its eigenbasis and when bracketed with the state vector, give a weighted sum of all the possible measurement results, which would correspond to the expectation value. But that's the most I can make out of it. And, again, I think I am missing something.
    If a state (pure) is represented by a state vector, and operators operate on state vectors to yield other state vectors, what is it that operators corresponding to measurables do to the state vector? It can't be a change of basis, because that is accomplished by a unitary operator. It can't be a real measurement, because the operator is not projecting the state vector onto a basis unless it is already in its eigenbasis. And the meaning of these operators becomes even less clear when they appear multiplied by each other, as in the commutators.
    I have started to explore this problem with spin operators, but an elementary treatment has not answered my questions. I realize that after reading a lot about angular momentum I might get to the point I understand. But I thought there should be some explanation of this independent of the particular states under consideration.
    I am really confused. All I have read doesn't seem to clarify this puzzle.
    I'll appreciate your help guys,
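    A minimal numerical sketch of the setup described in this post, assuming NumPy is available (the 2x2 Hermitian matrix and the state are arbitrary illustrative choices, not anything from a specific physical system): the eigenvalues are the possible results, the squared overlaps with the eigenvectors are the outcome probabilities, and the expectation value is their weighted sum.

    ```python
    import numpy as np

    # A hypothetical 2x2 Hermitian observable (here the Pauli-x matrix)
    omega = np.array([[0, 1],
                      [1, 0]], dtype=complex)

    # An arbitrary normalized state vector
    psi = np.array([1, 2j], dtype=complex)
    psi /= np.linalg.norm(psi)

    # Eigenvalues are the possible measurement results;
    # the columns of v are the corresponding eigenvectors
    w, v = np.linalg.eigh(omega)

    # Born-rule probability of each outcome: |<omega_i|psi>|^2
    probs = np.abs(v.conj().T @ psi) ** 2

    # The expectation value <psi|Omega|psi> is the probability-weighted
    # sum of the eigenvalues
    expval = (psi.conj() @ omega @ psi).real
    assert np.isclose(expval, np.sum(probs * w))
    ```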
  3. May 5, 2004 #2
    The description of an observable as a Hermitian operator is not strictly necessary from the point of view of measurement theory alone. One could simply describe a measurement by a collection of projectors onto the relevant eigenstates instead.

    However, the formalism is useful because of the dual role that observables play in quantum mechanics. They can be things that are measured, but also they can be things that are conserved by the unitary part of the dynamics. The latter means that if a system starts in a particular eigenstate of an operator representing a conserved quantity, then it will remain in that eigenstate. It is easy to check that if H is a Hermitian operator, then e^(iH) is a unitary operator that conserves H. Hence, the Hermitian operators are useful because they can be used to describe both the unitary part of quantum dynamics and the measurement part.
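    The claim above can be checked directly, assuming NumPy; the choice of H (the Pauli-z matrix) is just an illustration. The unitary e^(iH) is built here from the eigendecomposition of H rather than a library matrix exponential, so the block is self-contained.

    ```python
    import numpy as np

    # Hypothetical Hermitian operator H (here the Pauli-z matrix)
    H = np.array([[1, 0],
                  [0, -1]], dtype=complex)

    # Build U = e^{iH} from the eigendecomposition of H
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(1j * w)) @ v.conj().T

    # U is unitary ...
    assert np.allclose(U @ U.conj().T, np.eye(2))
    # ... and commutes with H, so the quantity H represents is conserved
    assert np.allclose(U @ H, H @ U)

    # An eigenstate of H stays the same eigenstate; only its phase changes
    eigstate = v[:, 0]
    evolved = U @ eigstate
    assert np.isclose(abs(eigstate.conj() @ evolved), 1.0)
    ```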
  4. May 5, 2004 #3
    Thank you sylboy. You mention a dual role of operators. Probably I can explore each role separately, as otherwise it may become overwhelming.
    With respect to the role they play in the time evolution, I have read (but never quite understood) that hermitian operators conserve probability. I think I need to study this more, but I think I would rather start with the other role you mention. (measurement)
    After reading your response, I was thinking the following: to give you the different eigenvalues of a particular possible measurement, the operator's matrix needs to be diagonalized. When you diagonalize the operator, you are changing the basis, and the same unitary operator used to diagonalize the observable's matrix is used on the state, to express it in the new basis. If the operator's matrix in a certain basis is not diagonal, the matrix itself contains all the information needed to construct the unitary matrix that diagonalizes it. So, could we say that the function of the observable's operator is to provide all the information needed for the basis change, even if the operator used to accomplish that change of basis is not the observable's operator itself but the unitary operator that is built from information provided by the observable's matrix?
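    The linear algebra described in this paragraph can be sketched concretely, assuming NumPy; the non-diagonal matrix is an arbitrary illustrative choice. The eigenvectors of the observable's matrix form the columns of the very unitary that diagonalizes it and that re-expresses the state in the eigenbasis.

    ```python
    import numpy as np

    # Hypothetical observable whose matrix is NOT diagonal in the starting basis
    omega = np.array([[2, 1],
                      [1, 2]], dtype=complex)

    # The observable's matrix itself supplies the eigenvectors ...
    w, v = np.linalg.eigh(omega)   # columns of v: eigenvectors in the old basis

    # ... and those eigenvectors form the unitary that diagonalizes it:
    # v-dagger @ Omega @ v has the eigenvalues (the possible results) on the diagonal
    assert np.allclose(v.conj().T @ omega @ v, np.diag(w))

    # The same unitary re-expresses a state in the eigenbasis
    psi = np.array([1, 0], dtype=complex)
    psi_eigenbasis = v.conj().T @ psi   # components <omega_i|psi>
    ```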
    Maybe I am going in the wrong direction, but I am trying to build a conceptual scaffold that can make sense to me, and which I can use as a frame of reference to later interpret and fill in all the details. I can also understand that a task like this is somewhat personal, each person having different preferences depending on their personality. I understand most people may prefer to just work with the operators, without worrying too much about their meaning, letting the meaning become apparent through use. I can understand how that may work for them. I have never considered myself very good or efficient at heavy symbol manipulation, and for this reason I rely more on deeper conceptual understanding and sometimes on paradigms.
    Sylboy, thanks again, and I hope you'll give me a few more insights.
  5. May 6, 2004 #4
    From the perspective of measurement theory, a Hermitian operator is just a convenient summary of everything you could want to know about the outcome statistics of that measurement on any state. Another convenient summary would be the set of projection operators associated with the eigenspaces, or the set of eigenstates if the operator is non-degenerate. The latter is often more convenient for discussing the foundational issues of quantum mechanics, but in practice physicists find the former more convenient because of the close relationship to conserved physical quantities.

    It is true that the operator contains all the information about the appropriate change of basis you need to make to figure out the outcome probabilities and post-measurement state. I don't know why you are trying to attach such a great significance to this piece of linear algebra, but you may choose to understand it in this way if you wish.

    It is not true that Hermitian operators conserve probability, but unitary ones certainly do. This just means that the inner product of the state with itself remains constant. The interpretation of "conserve probability" in this statement is just that the probability that the system is in some state is always 1. To see why this is true you just have to note that the question "Is the system in some state?" corresponds to the identity operator, so it always has the value 1 under unitary evolution.
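    The statement that unitary (not Hermitian) operators conserve probability is easy to verify numerically, assuming NumPy; the random matrix and state are arbitrary. A unitary built via QR decomposition leaves the norm of any state, and hence the total probability, equal to 1.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A random 3x3 unitary, obtained from the QR decomposition of a complex matrix
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    U, _ = np.linalg.qr(A)

    # An arbitrary normalized state ...
    psi = rng.normal(size=3) + 1j * rng.normal(size=3)
    psi /= np.linalg.norm(psi)

    # ... keeps <psi|psi> = 1 under unitary evolution: probability is conserved
    assert np.isclose(np.linalg.norm(U @ psi), 1.0)
    ```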
  6. May 6, 2004 #5


    Homework Helper

    The measurement "rotates" the state. Perhaps it is bad to think of the measurement as "eliminating components."

    You lost me on this one. The basis is a mathematical abstraction. I don't see how that can get in the way of a physical measurement.

    The theory/formalism is much more clear when demonstrated in the abstract. The three things you mentioned above do not depend on a particular state. Are you looking to add as many more things to this list as possible?

    I think you may be blurring the critical distinction between an operator and a matrix. There is much overlap, in the sense that you can express (and even think of) the operator in terms of a matrix and you can use a matrix to find the various properties of an operator. But the operator itself is not strictly a matrix. As I understand it, the fundamental distinction between an operator and a scalar dynamical variable is twofold: 1) the commutation issue, and 2) the fact that an operator is awaiting a state on which to operate, whereas a scalar dynamical variable stands alone. A matrix also stands alone (it can be treated without multiplying it onto a state), so it may not be providing you with a comprehensive picture. In other words, when you are considering an observable Ω as a matrix, then you have already, whether explicitly or not, performed the operations <φi|Ω|φj>.
    Last edited: May 6, 2004
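    The last point of the post above can be made concrete, assuming NumPy. Here a hypothetical operator is given only by its action on vectors, and the operations <φi|Ω|φj> are what turn it into a matrix in a chosen basis (the operator and basis are illustrative choices, not anything from the thread's physics).

    ```python
    import numpy as np

    # Hypothetical operator given only by its ACTION on vectors
    # (it swaps the two components, i.e. it acts like Pauli-x)
    def omega_action(vec):
        return np.array([vec[1], vec[0]], dtype=complex)

    # Choosing an orthonormal basis {|phi_i>} (here the standard basis) and
    # computing <phi_i| Omega |phi_j> is what turns the operator into a matrix
    basis = [np.array([1, 0], dtype=complex),
             np.array([0, 1], dtype=complex)]
    matrix = np.array([[basis[i].conj() @ omega_action(basis[j])
                        for j in range(2)] for i in range(2)])

    assert np.allclose(matrix, [[0, 1], [1, 0]])
    ```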
  7. May 6, 2004 #6
    I guess every person has a different way of learning, and I can understand how we can get impatient when other people try to grasp the problem from a different angle than the one we have chosen. I appreciate your comments though, and I have printed a copy to go over them as I keep thinking about this issue.
    With respect to your first comment:
    Rotating the state can give you the state expressed in the eigenbasis of the measurement operator, but if the state corresponds to a superposition of different eigenstates, the rotation doesn't choose one of the eigenstates as the result of your measurement. I could understand how rotating the state (in Hilbert space) can be part of the measurement, but I would bet it can't be the whole story. You would need a projector or something to complete the measurement. A rotation of the basis would not change the state, but a measurement does, as it chooses one eigenstate and sets the others to zero. Even if we ignore the selecting of a particular outcome and consider only the rotation part, a unitary operator allows you to rotate the state, so why would you need the hermitian operator at all?
    I am not really questioning it. I know that there must be a reason; it is just that I think I am missing something.
    I just meant that the operator operating on a state does not give you the eigenvectors or the eigenvalues unless the state is already expressed in the eigenbasis of the operator. Do I make any more sense? In other words: when you find the eigenbasis of the operator, and diagonalize it using the needed unitary operators, only then do you get to see the eigenvalues in the diagonal, and only when the state vector is expressed in this eigenbasis do the coefficients relate to the probability of getting the eigenvalues as results of the measurement.
    I agree with you. I would like to understand this issue without having to refer to a particular system. But as I have failed, I am trying to use a two state system as an example. If I can understand these concepts without getting into angular momentum, etc. I would prefer to do that. I am trying to keep the "list" as short as possible.
    With respect to your comment about my blurring the distinction between an operator and a matrix, I understand everything you are saying and totally agree. Maybe I didn't express myself clearly, but I understand the distinction. I understand that the operator can stand on its own and not become a matrix until you express it in some basis by bracketing with the basis vectors. That's what I meant by "the matrix of an operator". Like the Pauli matrices would be the matrices of the spin operators expressed in the z (+/-) basis.
    Thanks for your input Turin.
    Last edited: May 6, 2004
  8. May 6, 2004 #7



    This statement makes me wonder if we might be talking about two different things. I am picturing an arbitrary state and an arbitrary measurement. Let's say we have an operator, Ω, that has an eigenbasis {|ωi>}, and let it operate on a state, |ψ>. Mathematically, we would get:

    Ω|ψ> = Σi ωi|ωi>.

    Physically, however, we would get:

    Ω|ψ> ---> ωi|ωi>

    with a corresponding probability

    |< ωi | ψ >|²,

    which is generally nonunity. Since it is nonunity, the rotation is nondeterministic. Is that what you mean - how can the selection of the eigenstate be nondeterministic?

    I didn't mean a rotation of the basis, I meant a rotation of the state. You need the hermiticity as opposed to the unitarity because 1) the formalism requires real eigenvalues for observables, and 2) the value of the resultant eigenvalue (usually) contains all of the experimental information, which would be trivial if it were always unity. Therefore, you cannot get a meaningful formalism based on eigenvalued results by only allowing hermitian unitary transformations. I hope I'm not mixing stuff up; it's been over a year since my last QM instruction.

    My point was that the operation inherently returns an eigenvalue of the operator, according to the formalism. Whether or not it looks that way when you write it down on paper is a different issue. The physical interpretation of the QM state is that its "length" is arbitrary and its "direction" alone completely determines the physical state. When the operator acts on the state (when you "look" at the state), it necessarily returns an eigenvalue and points the state in the corresponding direction.

    I believe you mean that you are trying to use a 2 dimensional Hilbert space. The system can be in a superposition of two states, the superposition being its own distinct state, and therefore allowing an infinite number of distinct states. (I'm assuming you're talking about spin 1/2 particles.)
  9. May 7, 2004 #8
    Your answer gave me a lot to think about.
    I was, as a matter of fact, misinterpreting you with respect to the rotation. I was talking about a rotation of the basis and you were talking about a rotation of the state vector. I think this is crucial and deserves some discussion.
    A rotation of the state vector with respect to a particular basis would imply a change in the coefficients (components) of the vector. This would represent a change in the physical object described by the vector.
    What I mean by a rotation of the basis may be a sloppy way of expressing it (although I have seen it in some books). What I actually mean is this:
    Every time you describe the vector you use some basis. So you could say that you are using a reference frame that coincides with (is aligned with) that basis. A change to another basis could be seen as a rotation of your reference frame so that it now aligns with the new basis. The position of the state vector here remains fixed with respect to any of the possible bases (with respect to the Hilbert space). It is just your point of view that changes. So the physical situation does not change, only your description of it changes.

    With respect to the equations you list, the first one is:
    Are you sure this is correct? Wouldn't it be:
    Ω|ψ> = Σi ωi |ψi> or...
    Ω|ψ> = Σi ωi |ωi><ωi|ψ> or...
    Ω|ψ> = Σi ωi <ωi|ψ>|ωi> or...
    Ω|ψ> = Σi ωi ci |ωi>

    Some other questions (if you don't mind):
    (1) to write these equations I had to cut and paste from yours. How do you make greek letters and subscripts on this forum?
    (2) I noticed that your notation is very similar to Shankar's. Is that the book you used?
    (3) I would assume you studied Quantum Mechanics in school (graduate studies?) What are you doing these days? anything related to physics?

    I am myself presently unemployed. I used to work as an engineer but have a hard time finding a job now. I got a bachelor's degree in physics which doesn't help. I was planning to get my master's in physics because that is what I like, but now I am reconsidering. Even if I don't continue my formal studies in physics, I'll keep studying QM on my own.
    Turin: thanks for your replies, they are helping me a lot in thinking about these issues in QM. Things are not clear for me yet but I feel I am making some progress.
    Thanks again,
  10. May 7, 2004 #9
    It should be

    Ω|ψ> = Σi ωi |ωi><ωi|ψ> or...
    Ω|ψ> = Σi ωi <ωi|ψ>|ωi>

    presuming that ωi are the eigenvalues of Ω and |ωi> are the corresponding eigenstates. I don't know how to write the greek symbols (I also used cut and paste), but I do know how to do this:

    [tex]\Omega | \psi \rangle = \sum_i \omega_i \langle \omega_i | \psi \rangle | \omega_i \rangle [/tex]

    which is much cooler anyway (click on the equation to find out how).

    I think Turin means that on measurement a state is rotated onto the eigenstate corresponding to the outcome that was obtained. The unitary operator that does this depends on both the initial state |ψ> and the outcome |ωi>. It is not unique, since there are many unitary operators that do this.

    I think it is better to regard the state update rule as a projection rather than a rotation because the projection operators are unique and do not depend on |ψ>. This is a better way of thinking about it if you ever study the more advanced aspects of measurement theory, where state vectors are generalized into things called density operators and the notion of a projective measurement is generalized into something called a POVM (Positive Operator Valued Measure). Not all physicists have to study this, but if you are interested in the foundations of quantum theory or quantum computing and information theory then you will eventually come across it. I think quantum statistical mechanics uses this formalism as well.
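    The uniqueness point above can be sketched numerically, assuming NumPy; the Pauli-z eigenstates are an illustrative choice. The projectors onto the eigenstates are fixed objects: idempotent, mutually orthogonal, summing to the identity, with no reference to any particular state |ψ>.

    ```python
    import numpy as np

    # Projectors onto the eigenstates of a hypothetical observable (Pauli-z)
    up = np.array([1, 0], dtype=complex)
    down = np.array([0, 1], dtype=complex)
    P_up = np.outer(up, up.conj())        # |up><up|
    P_down = np.outer(down, down.conj())  # |down><down|

    # The projectors are idempotent, mutually orthogonal, and sum to the
    # identity -- and nothing here depends on the state being measured
    assert np.allclose(P_up @ P_up, P_up)
    assert np.allclose(P_up @ P_down, np.zeros((2, 2)))
    assert np.allclose(P_up + P_down, np.eye(2))
    ```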
  11. May 7, 2004 #10




    Basically. I imagine that one could probably make an argument for an exception, but I don't imagine that it would be altogether enlightening for our purposes.

    I'm not sure I agree with this. In some ways, it does seem like a rotation, but I imagine that it could also involve reflections and discontinuities.

    Do you mean as a consequence of a measurement? If I'm not mistaken, this is not consistent with experimental results. For instance, if the measurement were only to change the basis but leave the physical state alone, then I don't think there would be any non-trivial commutation of observables.

    You are quite right. How sloppy (incorrect) of me. I wouldn't use your first suggestion, though.

    Concatenate these strings: "&" "alpha" ";"

    Yes. Everything about QM that I have learned in a formal setting has been from Shankar.

    Yes. I graduate tomorrow as a matter of fact. I have no job prospects, so I am stressing more than anything else. We seem to have striking similarities in background and contemporary situation.
  12. May 7, 2004 #11



    Can you give an example of

    U1|ψ> = |φ> and U2|ψ> = |φ>

    such that U1 ≠ U2.

    For some reason, I just can't think of how to show that.

    How would this then demonstrate the statistical dependence on |ψ>?
  13. May 7, 2004 #12
    To specify a d-dimensional unitary operator, you have to give its action on at least d-1 linearly independent vectors (the last one will be fixed by orthogonality relations). There are no examples in 2-dimensions, which may be why you had difficulty finding an example, but here is one in a real 3-d space:

    Suppose |1> and |2> and |3> are orthonormal basis vectors and we want to find a unitary, U, such that

    [tex] U |1 \rangle = \frac{1}{\sqrt{2}} ( |1 \rangle + |2 \rangle) [/tex]

    We could choose

    [tex] U |2 \rangle = \frac{1}{\sqrt{2}} ( |1 \rangle - |2 \rangle) [/tex]
    [tex] U |3 \rangle = |3 \rangle [/tex]

    but
    [tex] U |2 \rangle = |3 \rangle [/tex]
    [tex] U |3 \rangle = \frac{1}{\sqrt{2}} ( |1 \rangle - |2 \rangle) [/tex]

    is an equally good choice.

    If you are concerned that this is just a relabeling of the basis vectors then note that

    [tex] U |2 \rangle = \frac{1}{\sqrt{3}} ( |1 \rangle - |2 \rangle + |3\rangle) [/tex]
    [tex] U |3 \rangle = \frac{1}{\sqrt{6}} (|1 \rangle - |2 \rangle - 2 |3\rangle)[/tex]

    is also a good choice.

    I'm sure you can now come up with a continuous infinity of other possibilities.
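    The non-uniqueness claim above can be verified numerically, assuming NumPy. The two matrices below encode (column by column) the first two choices from the post: both are unitary and both send |1> to (|1> + |2>)/√2, yet they are different operators.

    ```python
    import numpy as np

    s = 1 / np.sqrt(2)
    # Two unitaries; each column is the image of a basis vector |1>, |2>, |3>
    U1 = np.array([[s,  s, 0],
                   [s, -s, 0],
                   [0,  0, 1]])
    U2 = np.array([[s, 0,  s],
                   [s, 0, -s],
                   [0, 1,  0]])

    e1 = np.array([1.0, 0.0, 0.0])

    for U in (U1, U2):
        # both matrices are unitary (real orthogonal here) ...
        assert np.allclose(U @ U.T, np.eye(3))
        # ... and both send |1> to (|1> + |2>)/sqrt(2)
        assert np.allclose(U @ e1, [s, s, 0])

    # yet they are different operators
    assert not np.allclose(U1, U2)
    ```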

    The state update rule is

    [tex]|\psi \rangle \rightarrow \frac{\Pi | \psi \rangle}{|| \Pi | \psi \rangle ||}[/tex]

    where [tex]\Pi[/tex] is the projector onto the eigenspace corresponding to the eigenvalue obtained in the measurement. This clearly shows the dependence on the state, but the operator involved is not dependent on the state and is uniquely defined (modulo some mathematical technicalities that arise for continuous variables).
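    The update rule above is a one-liner in code, assuming NumPy; the state and the Pauli-z "up" projector are illustrative choices. The post-measurement state depends on |ψ> (through the normalization), but the projector itself never does.

    ```python
    import numpy as np

    # The projective state update rule: |psi>  ->  P|psi> / ||P|psi>||
    def update(psi, P):
        post = P @ psi
        return post / np.linalg.norm(post)

    # Hypothetical example: measure Pauli-z, suppose the "up" outcome occurs
    psi = np.array([3, 4j], dtype=complex) / 5.0   # arbitrary normalized state
    P_up = np.array([[1, 0],
                     [0, 0]], dtype=complex)        # projector, independent of psi

    post = update(psi, P_up)
    assert np.allclose(post, [1, 0])   # post-measurement state is the eigenstate
    ```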
  14. May 7, 2004 #13
    Well guys, you gave me a lot to read. I'll have to take a break now and start working on something that perhaps can make me some money. I printed your posts and I'll read them tonight on the couch while my wife watches TV.
    I think I'll answer tonight or tomorrow morning.
  15. May 7, 2004 #14



    Of course; that makes sense. I don't know why I couldn't think of that. :rolleyes:

    I'm still missing something. I don't see how this helps one understand the collapse. In fact, it seems more complicated than the observable operator formalism. If [tex]\Pi[/tex] does not depend on the state, then what would prevent one from specifying [tex]\Pi[/tex] to project [tex]|\psi\rangle[/tex] onto a state [tex]|\phi\rangle[/tex] for which [tex]\langle\phi|\psi\rangle = 0[/tex]?
  16. May 8, 2004 #15
    Slyboy: sorry for misspelling your name. It must be dyslexia. Other things came up and I was not able to take the time to think about your argument about the different unitary operators.
    States in Hilbert space could be visualized as points on a sphere. To one state there corresponds only one point on the sphere. When we move the state vector, we are going from point a to point b on the sphere. This motion can always be accomplished by one of two rotations in the plane determined by both points and the origin. Whether you accomplish the change by a continuous motion or a sudden jump is not made explicit by the unitary operator.
    In 2-D real space, a rotation matrix takes you from point a to point b on the unit circle, but although we call the matrix a "rotation" matrix, that matrix does not tell you that you are smoothly moving the thing through all the intermediate angles. The rotation matrix could very well represent the point disappearing from position "a" and appearing at position "b". In this case, or in the Hilbert space case, I wonder if the matrix for a rotation of 180 degrees and that for a reflection would look different.
    On the other hand, if what we are doing is changing basis, where each eigenvector is represented by a point on the sphere, as we are talking about many points, reflection would look different than rotation, but still the rotation matrix would not necessarily mean a smooth motion.
    Now, when I suggested viewing a basis change as a rotation, I was not trying to be precise. I was just trying to find some way to visualize the thing, even if later I have to replace that concept by a more precise one. I understand orthogonal matrices are not only rotation matrices but also reflection and inversion matrices, and the same must be true for unitary matrices I guess.
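    The 2-D question raised above has a concrete answer, sketched here assuming NumPy. The matrices for a 180-degree rotation and for a reflection do look different (their determinants differ in sign), even though both can carry one particular point to the same destination.

    ```python
    import numpy as np

    # Rotation by 180 degrees vs. reflection across the x-axis, in real 2D
    R = np.array([[-1, 0],
                  [0, -1]])   # rotation:   det = +1
    F = np.array([[1, 0],
                  [0, -1]])   # reflection: det = -1

    # The matrices DO look different, and the determinant tells them apart
    assert np.isclose(np.linalg.det(R), 1.0)
    assert np.isclose(np.linalg.det(F), -1.0)

    # But both carry the single point a = (0, 1) to the same point (0, -1),
    # so watching one point move cannot reveal which transformation acted
    a = np.array([0, 1])
    assert np.allclose(R @ a, F @ a)
    ```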
    I agree with you, and I think you misinterpreted me. It was exactly my point that a change of basis alone cannot represent a measurement.

    On some equations you posted previously you said: "mathematically it would be like this..." and "physically it would be like this...". Without getting into the details, and more on a philosophical note, I think that what we are trying to do with the math is represent as closely as possible what is happening physically. I do understand that sometimes we may use terms whose "physical existence" is questionable, and in other cases there may be no good math to represent the physical process. But I would guess that in most of quantum mechanics there is a good correspondence between the "physical" and the "mathematical", as long as we consider things like a state as "physical". I am not trying here to get into a deep philosophical discussion; all I mean is that I was somewhat suspicious about your assertion that something is one way "mathematically" and another way "physically".

    Slyboy and Turin,
    To summarize a little our discussion, I think we kind of agree on the following:
    (1) A measurement "could" be represented by a rotation of the state vector (which does not need to be continuous) to a position where it coincides with one of the eigenvectors of the measurement operator.
    (2) The previous representation may not be the best. A projection may be a better way to visualize it or to carry out the calculation.
    (3) If we have a measurement operator and a generic state vector, in order to carry out the multiplication you express both the operator and the vector in some basis. If the basis you chose is not an eigenbasis of the operator, when you multiply, you are not going to get each component of the state vector multiplied by the eigenvalue. In order to get that, you have to make a change of basis to the eigenbasis of the operator. You achieve this change of basis using a unitary operator, which you apply to both the matrix and the vector (unless one of them is already expressed in the eigenbasis). This change of basis does nothing to the physical state itself; it is only the description that changes. Once you have everything in the eigenbasis of the operator (the matrix is now diagonal), multiplying the measurement matrix by the state vector gives you a vector where each original component is now multiplied by the eigenvalue.
    Do we agree up to this point?

    Well, even if this is correct, I fail to see the significance of this product. It appears that the change of basis was more crucial (it gave us the eigenvalues and the eigenvectors). But multiplying the matrix by the vector doesn't seem to accomplish much, other than muddying things up by mixing amplitudes with eigenvalues by multiplying them together.
    The only use I can see in this is to put a bra on the left and then get the expectation value.
    Someone may say that that's exactly what the operators are for, that they give us a bridge between the classical quantities and the quantum world. I can understand that.
    On the other hand, looking at the commutation relations, it appears that the operators have a greater significance than this. Maybe what I am trying to visualize can not be visualized, but I think there should be some meaning to the operators even if we stay in the quantum world and not try to make comparisons with the classical. I understand the correspondence principle has been important, but probably that was just a crutch.
    So in summary, I take a state, I multiply it times a measurement matrix (yes I say matrix, not operator) and the matrix is not in the eigenbasis of the operator. What have I done to the state? If I multiply the state vector times two measurement matrices (like in a commutator), what have I really done?
    Last edited: May 9, 2004
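    The closing question of the post above (what multiplying by two measurement matrices really does) can at least be computed, assuming NumPy; the Pauli matrices are an illustrative choice. Applying two non-commuting matrices in the two possible orders gives genuinely different vectors, and the commutator is exactly the operator that measures the difference.

    ```python
    import numpy as np

    # Two measurement matrices applied in different orders generally do
    # different things to a state; the commutator measures the difference
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])

    comm = sx @ sz - sz @ sx
    assert np.allclose(comm, -2j * sy)   # for the Pauli matrices, [sx, sz] = -2i sy

    # Acting on a state, the two orderings give genuinely different vectors
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
    assert not np.allclose(sx @ (sz @ psi), sz @ (sx @ psi))
    ```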
  17. May 9, 2004 #16



    alexepascual and slyboy,
    You have converted me. I don't know what I was thinking. I now believe that the exact arguments that I made in favor of the rotation picture actually serve to better promote the projection picture. (I'm still awaiting slyboy's reply with interest.)

    Perhaps I was unclear when I distinguished the "mathematical" quality from the "physical" quality. What I meant was:

    - when you "write it down" or "do the calculation" the writing contains a distribution and there is no way for the calculation to make a selection of a particular eigenvalue. The calculation can show a favored eigenvalue, but there is no way to be sure unless the state is prepared as an eigenstate.

    - when you "do the experiment" the "display on the experimental apparatus" will "show only one of the eigenvalues" and will not give a distribution. Of course, if you "do the experiment" repeatedly, the apparatus will eventually develop the distrubution and so "agree with the math."

    I didn't intend anything philosophical.

    I agree with (1) and (2). I still don't quite follow the point of (3). Maybe you could give an example. I think what is confusing me about your (3) is that I see no need to first express the state in some obscure basis and then subsequently transform to the eigenbasis of the operator. Furthermore, if the state is an eigenstate (of the operator), then it does not matter whether or not it is expressed in the eigenbasis (of the operator); the operator will still return the original state multiplied by the eigenvalue. If the state is not an eigenstate, it can still be decomposed into a superposition of eigenstates (without applying any unitary operators?) (See your post, post #8).
  18. May 9, 2004 #17
    When I said I was not intending a deep philosophical discussion, I was talking about my own argument. I didn't mean that your arguments appeared to be philosophical. Actually I do like to think about and discuss the philosophical foundations and implications of quantum mechanics, but here I was trying not to fall into that temptation, as I wanted to focus on the math and its relation to the physical entities it describes.
    Some of what you said in your last post is giving me food for thought. I'll have to take some time to digest it.
    I also want to go over Slyboy's discussion about the unitary operators (I haven't had time yet).
  19. May 10, 2004 #18
    Hey guys !
    By the way, I know that any observable is represented by a hermitean operator.
    Now, is it true that any hermitean operator can be considered as representing an observable (?_?)
  20. May 10, 2004 #19
    I don't know. Maybe Slyboy or Turin can answer your question. At this point, as I am focussing on observables, I am not interested in knowing if Hermitian operators may represent other things. If on the other hand I found that they are applied outside of quantum mechanics and that their use in another science makes their meaning more clear, I might be interested in looking into that. But given the very special role played by imaginary numbers in quantum mechanics, I doubt it very much this comparison would yield anything fruitful.

    Your argument about the existence of many unitary operators that take you from one point to another is convincing. We could also think of many different rotations that can take you from one point to another on the surface of a sphere. You could, though, choose the rotation that moves the point within a plane that includes the origin. This would eliminate the multiplicity of unitary operators. But in its application to QM, its dependence on the state would probably always be a problem, as you pointed out.

    You say:
    To decompose the state into eigenstates, you first have to know what those eigenstates are. Once you have the eigenstates, decomposing the state vector into the different eigenstates is equivalent to a change of basis. Now, maybe you say: No! I start without any basis, I just have the abstract vector and then I express it in the eigenbasis of the operator! OK, but I don't see in practice how you can describe a state without doing it in some basis. On paper you can write ψ, but that doesn't mean much; it doesn't tell you anything about the particle until you give the components in some basis. If you ask the guy in the lab to give you a report on the state of the particle, he'll give you a list of complex numbers with units, where those units are base kets. He won't hand you a piece of paper with a letter ψ written on it.
    It is not that you start with some obscure basis. I think you start with some standard basis, maybe a basis that is an eigenbasis of a previous measurement, and then you have to look for the eigenbasis that corresponds to the present measurement or measurable. To know what the components of the state vector are in this eigenbasis, you have to change basis, which you do with a unitary operator in which each column is an eigenvector expressed in the old basis. When you write this base-change with kets, it may not be obvious that you have a unitary operator, but you could identify a ket-bra product as exactly that unitary base-change operator.
    I think I would have more comments, but it's getting late and I have to go.
    Last edited: May 10, 2004
  21. May 10, 2004 #20



    I would say that while ψ is said to be abstract mathematically, it seems that it is not really abstract experimentally. I suppose it could be thought of as a point in its own basis. But I don't think I agree that a decomposition into eigenstates is necessarily the same as a change of basis. I definitely see the similarities, but I would need more convincing, or more pondering, before I accepted that. I'll try to think it through here for the benefit of whoever is interested:


    ψ( x ) = < x | ψ >
    g( k ) = < k | ψ >

    Eigenfunction expansion:

    ψ( x )
    = Σi φi( x ) ci
    = Σi < x | φi > < φi | ψ >

    Change of basis (i.e. IFT):

    ψ( x )
    = ∫ dk K( k, x ) g( k )
    = ∫ dk < x | k > < k | ψ >
    where K( k, x ) is the kernel of the transformation.

    I'm not getting anywhere with this. I'll have to think about it.
    Last edited: May 10, 2004
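    A discrete, finite-dimensional sketch of the question being worked through above, assuming NumPy (the 2x2 operator and state are illustrative choices): the eigenfunction-expansion coefficients <φi|ψ>, computed one inner product at a time, are exactly the components produced by the unitary change of basis v†, which supports reading the expansion as a basis change.

    ```python
    import numpy as np

    # A discrete (2-dimensional) analogue: expanding a state in the
    # eigenvectors of an operator IS a unitary change of basis
    omega = np.array([[0, 1],
                      [1, 0]], dtype=complex)
    w, v = np.linalg.eigh(omega)        # columns of v: eigenvectors |phi_i>

    psi = np.array([0.6, 0.8], dtype=complex)

    # Expansion coefficients c_i = <phi_i|psi>, one inner product at a time
    c = np.array([v[:, i].conj() @ psi for i in range(2)])

    # The same numbers come from the unitary v-dagger acting on psi in one shot
    assert np.allclose(c, v.conj().T @ psi)

    # and the expansion reassembles the state: psi = sum_i c_i |phi_i>
    assert np.allclose(sum(c[i] * v[:, i] for i in range(2)), psi)
    ```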