
Projector in QM

  1. Aug 5, 2004 #1
    I saw some discussions about projectors in some threads. Also, the projector is used in this book to define a pure state, but the book does not say what a projector is.

    http://www.math.sunysb.edu/~leontak/book.pdf [Broken]

    Through some checking, by assuming [tex] \mathrm{Tr}\,( A P_\psi ) = ( A (\psi) , \psi ) [/tex] (in this book, that is the expectation value for [tex] A P_\psi [/tex] ), I got the answer:
    [tex] P_\psi ( e_n ) = \sum_i c_i \overline{c_n } e_i [/tex]
    if
    [tex] \psi= \sum_i c_i e_i [/tex]
    for an orthonormal basis [tex] \{ e_n \} [/tex].

    That seems to be a reasonable expression for it.

    Also, if
    [tex] \psi_1 = c_1 \psi + c_2 \psi_\bot [/tex]
    then
    [tex] P_\psi ( \psi_1 ) = c_1 \psi [/tex] , where
    [tex] ( \psi_\bot , \psi ) = 0 [/tex] .

    Is this right?
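
    Here is a quick numerical check of both statements above (my own sketch, not from the book), assuming ψ is normalized (a point that comes up later in the thread) and taking {e_n} as the standard basis of C^3:

[code]
import numpy as np

c = np.array([1 + 2j, 0.5 - 1j, 3j])
c = c / np.linalg.norm(c)              # normalize: sum_i |c_i|^2 = 1
psi = c                                # components of psi in the basis {e_n}
P = np.outer(psi, psi.conj())          # P_psi as a matrix: P v = (v, psi) psi

# P_psi(e_n) = sum_i c_i conj(c_n) e_i
for n in range(3):
    e_n = np.eye(3)[:, n]
    assert np.allclose(P @ e_n, c * c[n].conj())

# P_psi(c_1 psi + c_2 psi_perp) = c_1 psi, for psi_perp orthogonal to psi
psi_perp = np.array([0j, 3j, 0.5 - 1j])
psi_perp -= np.vdot(psi, psi_perp) * psi      # remove the component along psi
assert np.allclose(P @ (0.3 * psi + 0.7 * psi_perp), 0.3 * psi)
[/code]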
     
    Last edited by a moderator: May 1, 2017
  3. Aug 5, 2004 #2
    Definition: P is a "projector" if (and only if):

    (i) P^t = P ,

    and

    (ii) P has eigenvalues 0 and/or 1 .

    It then follows that P is a "projector" if, and only if, P^t = P and P² = P.
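
    A small numerical illustration of this definition (my own sketch): build P as the projector onto a random unit vector and check (i), (ii), and P² = P.

[code]
import numpy as np

rng = np.random.default_rng(0)
e = rng.normal(size=4) + 1j * rng.normal(size=4)
e = e / np.linalg.norm(e)                 # a random unit vector in C^4

P = np.outer(e, e.conj())                 # projector onto e

assert np.allclose(P, P.conj().T)         # (i)  P^t = P  (Hermitian)
assert np.allclose(P @ P, P)              #      P^2 = P  (idempotent)
vals = np.linalg.eigvalsh(P)
assert np.allclose(vals, [0, 0, 0, 1])    # (ii) eigenvalues are 0 and/or 1
[/code]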

    --------------------

    I'm not quite sure what you are asking here. But all of the equations are correct. However, the "Trace" equation is said to be "the expectation value for A when the system is in the state ψ", not "the expectation value for AP_ψ".

    --------------------

    Yes.
     
    Last edited: Aug 5, 2004
  4. Aug 5, 2004 #3
    Eye,

    Thanks.

    The error you pointed out was my typo. I will think about the link between the equation I derived and your definition.
     
  5. Aug 5, 2004 #4
    Eye,

    Shall it be [tex] \overline{P^t} = P [/tex]?

    Thanks
     
  6. Aug 5, 2004 #5
    By the way, this condition is also needed in my equation:

    [tex] \sum_i c_i \overline{c_i } = 1 [/tex]

    Otherwise, [tex] P_\psi ( \psi ) = \sum_i c_i \overline{c_i } \psi [/tex]

    Or, in general, the equation shall be:

    [tex] P_\psi ( e_n ) = ( 1 / \sum_j c_j \overline{c_j } ) \sum_i c_i \overline{c_n } e_i [/tex]
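
    A quick numerical check of the normalization point (my own sketch): if ψ is not normalized, the map v → (v, ψ) ψ is no longer idempotent, while dividing by (ψ, ψ) = Σ_j c_j conj(c_j) restores P² = P.

[code]
import numpy as np

c = np.array([2.0 + 1j, -1.0, 0.5j])     # deliberately NOT normalized
R = np.outer(c, c.conj())                # R(v) = (v, psi) psi

assert not np.allclose(R @ R, R)         # not idempotent, hence not a projector
P = R / np.vdot(c, c).real               # divide by (psi, psi) = sum_j |c_j|^2
assert np.allclose(P @ P, P)             # now idempotent again
[/code]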
     
  7. Aug 5, 2004 #6
    Notation conversion between inner product and bra/ket for C and D belonging to the Hilbert Space.

    [tex] ( C , D ) = < D | C > = \sum_j c_j \overline{d_j } [/tex]

    if [tex] C = \sum_j c_j e_j [/tex] and [tex] D = \sum_j d_j e_j [/tex]
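
    A tiny numerical illustration of this conversion (my own sketch): numpy's vdot conjugates its first argument, so <D|C> is np.vdot(D, C).

[code]
import numpy as np

C = np.array([1 + 1j, 2j, -1.0])
D = np.array([0.5, 1 - 1j, 3.0 + 0j])

inner = np.sum(C * D.conj())               # (C, D) = sum_j c_j conj(d_j)
assert np.isclose(inner, np.vdot(D, C))    # = <D|C>
[/code]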
     
  8. Aug 5, 2004 #7
    If by "t" you don't mean the "Hermitian transpose" (called the "adjoint"), then it shall.

    ------------------------

    Oh, I was assuming that. It means that the state ψ is "normalized" (i.e. to unity).


    Yes. BUT, then, for nontrivial P_ψ (i.e. P_ψ ≠ 0), it is not a projector (because the eigenvalue 1 has become Σ_i |c_i|²).


    The factor in front should still be a 1 (not 1/Σ_j |c_j|²). (BUT remember: this more general case is no longer a projector!)

    ------------------------

    Looks fine.
     
    Last edited: Aug 5, 2004
  9. Aug 5, 2004 #8

    turin

    Homework Helper

    Does a projector have an inverse? I never thought of it before, but now I'm wondering ... (it doesn't seem to have an inverse)
     
  10. Aug 5, 2004 #9
    idempotent

    P² = P

    implies

    P⁻¹P² = P⁻¹P

    P = 1 ;

    the only one with an inverse is the identity

    (alternatively, you can say that on account of a 0 eigenvalue det = 0, and therefore, there is no inverse, except when P = 1)

    (alternatively, you can say that when P ≠ 1 (and visualizing it geometrically) the mapping is MANY-to-ONE, and therefore has no inverse)
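
    A numerical illustration of the determinant argument (my own sketch): a 1-dimensional projector in 3-dimensional space is singular, so numpy refuses to invert it.

[code]
import numpy as np

e = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # a real unit vector
P = np.outer(e, e)                           # projector onto e

print(np.linalg.det(P))                      # 0.0 -- a zero eigenvalue kills the determinant
try:
    np.linalg.inv(P)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)                # numpy reports a singular matrix
[/code]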
     
    Last edited: Aug 6, 2004
  11. Aug 6, 2004 #10
    I think I derived a few things that seem important to me:

    1). For [tex] \psi_1 , \psi_2 \in H [/tex] ,

    in order for
    [tex] P_{\psi_1} + P_{\psi_2} = P_{\psi_1+\psi_2} [/tex]
    then
    [tex] \psi_1 \bot \psi_2 [/tex] , i.e. [tex] ( \psi_1 , \psi_2 ) = 0 [/tex] .

    2). From that,

    [tex] \sum_n P_{e_n} = P_{\sum_n e_n} [/tex]
     
  12. Aug 6, 2004 #11
    I verified that:

    For [tex] \psi , \psi_n \in H [/tex] ,

    where the [tex] \psi_n [/tex] are an eigenbasis of A,

    this holds true:
    [tex] < A | P_\psi > = \sum_n c_n \overline{c_n} E_n [/tex]

    where
    [tex] \psi = \sum_n c_n \psi_n [/tex].
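
    A numerical check of this (my own sketch), reading the left-hand side as Tr( A P_ψ ), with A written in its own eigenbasis {ψ_n} and E_n its (assumed) eigenvalues:

[code]
import numpy as np

E = np.array([1.0, 2.0, 5.0])            # assumed eigenvalues E_n of A
A = np.diag(E)                           # A expressed in its own eigenbasis

c = np.array([0.6, 0.8j, 0.0])           # normalized: sum_n |c_n|^2 = 1
P = np.outer(c, c.conj())                # P_psi with psi = sum_n c_n psi_n

lhs = np.trace(A @ P)
rhs = np.sum(np.abs(c) ** 2 * E)         # sum_n c_n conj(c_n) E_n
assert np.isclose(lhs, rhs)
[/code]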
     
  13. Aug 6, 2004 #12
    A question now is whether the [tex] P_{e_n} [/tex] can serve as a basis ( or generators ) of GL(H) or A(H)?
     
  14. Aug 6, 2004 #13
    If
    [tex] \sum_n a_n P_{e_n} = P_{\sum_n a_n e_n} [/tex]
    were true, then the [tex] P_{e_n} [/tex] could not serve as a basis, because all of their linear combinations would also be projectors.

    But [tex] 2 P_{e_n} [/tex] does not seem to be the projector for
    [tex] 2 e_n [/tex] ; so in general there still seems to be a possibility.
     
  15. Aug 6, 2004 #14
    Do you mean "(a) implies (b)", "(b) implies (a)", or both ?


    Yup.

    --------------------------

    You didn't mention that the E_n are the eigenvalues of A. (... I assume you assumed <ψ|ψ> = 1.)

    --------------------------

    ... All of this will become much, much simpler once you start using Dirac notation.

    --------------------------------------------
     
  16. Aug 6, 2004 #15
    Eye,

    Thanks.

    I have a question about Lemma 1.1 on page 40 of that book.

    I don't think there is anything that says the eigenbasis of a self-adjoint operator can always span the entire Hilbert space.

    If that's the case, I can write

    [tex] \psi = \sum_i a_i \Psi_i + b \psi_\bot [/tex]

    where

    [tex] \psi_\bot \bot \Psi_n [/tex] , for all n.

    [tex] ( M (\psi) , \psi ) [/tex]
    might not equal
    [tex] ( \sum_i a_i P_{\Psi_i} ( \psi ) , \psi ) [/tex] .
     
  17. Aug 7, 2004 #16
    Eye,

    What I found is that the following is a sufficient condition for this Lemma to be true:

    [tex] ( M (\psi) , \psi_\bot ) = 0 [/tex]

    Anyway, if there is a countable basis, this seems to be true.

    I also found I can easily set up a self-adjoint operator that has only a limited number of nonzero eigenvalues and maps the rest of the basis to zero.
     
  18. Aug 8, 2004 #17

    vanesch

    Staff Emeritus
    Science Advisor
    Gold Member

    No, it doesn't, except for the trivial projector onto the whole space. The "feel it in the bones" argument is that when you project something, you don't know what the orthogonal component was (the one that has been filtered out).
    The mathematical argument is of course that a non-trivial projector has eigenvalue 0 (for the orthogonal directions that have been filtered out!).

    In fact, this property is at the heart of the "measurement problem" in QM: a projection (such as happens in von Neumann's view) can never be the result of a unitary evolution.

    cheers,
    Patrick.
     
  19. Aug 8, 2004 #18
    GL(n,C) is a group with respect to matrix multiplication. It is not "closed" under matrix addition, and therefore does not have the vector space structure you are assuming in your question. On the other hand, the full set of n x n matrices with entries from C, M(n,C), is "closed" under matrix addition (as well as multiplication, of course). So, you might want to pose your question with respect to M(n,C).

    In that case, for n > 1, the answer is "no". M(n,C) has dimension n², whereas the P_i will span a subspace with dimension no larger than n (of course, the P_i are in fact linearly independent, so they span a subspace of dimension equal to n).
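
    A quick numerical check of this dimension count (my own sketch): vectorize the P_i = |e_i><e_i| and compare the dimension of their span with dim M(n,C) = n².

[code]
import numpy as np

n = 3
basis = np.eye(n)
projectors = [np.outer(basis[:, i], basis[:, i]) for i in range(n)]   # the P_i

stacked = np.array([P.flatten() for P in projectors])   # each P_i as a vector in C^(n^2)
print(np.linalg.matrix_rank(stacked))                   # n = 3: the P_i are linearly independent
print(n * n)                                            # n^2 = 9 = dim M(n,C), so they cannot span it
[/code]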
     
    Last edited: Aug 8, 2004
  20. Aug 8, 2004 #19
    The object on the left-hand-side is not (in general) a projector. That object has eigenvalues a_n , whereas a projector has eigenvalues 0 and/or 1.

    --------------------------

    At this juncture, it is instructive to consider ordinary 3-D Euclidean space. Pick any unit vector n. Then, the projector corresponding to this unit vector is given by

    [2] P_n(v) = (v ∙ n) n , for any vector v .

    The description of [2] is "the projection of v along n". Do you remember what this means geometrically? (see figure).

    --------

    NOTE:

    In Dirac notation, [2] becomes

    P_n|v> = <n|v> |n> = |n> <n|v> = (|n><n|) |v> , for any |v> .

    We therefore write:

    P_n = |n><n| .

    If you think of each ket as a column matrix and the corresponding bra as its Hermitian transpose (a row matrix) then this notation can be taken "literally".
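
    Taking that "literally" (my own sketch): a ket as a column matrix, its bra as the Hermitian transpose (a row matrix), and P_n = |n><n| as their matrix product.

[code]
import numpy as np

ket_n = np.array([[1.0], [1.0j]]) / np.sqrt(2)    # a 2x1 column matrix of unit norm
bra_n = ket_n.conj().T                            # the corresponding 1x2 row matrix

P_n = ket_n @ bra_n                               # |n><n| , a 2x2 matrix

v = np.array([[2.0], [3.0 - 1.0j]])               # an arbitrary ket |v>
assert np.allclose(P_n @ v, ket_n * (bra_n @ v))  # P_n|v> = |n> <n|v>
[/code]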

    --------------------------

    I suggest you reserve the symbol "P", in the above type of context, only for a "projector" proper. Also, I suggest you invoke the rule that the "subscript" of P is always a "unit" vector. These two prescriptions would then disqualify the "legitimacy" of the right-hand-side of [1] on both counts.

    At the same time, if you want to consider generalizations for which a relation like [1] holds, then use a symbol "R" (or whatever else) instead of "P". The generalization of [2] which gives a relation like [1] is then simply:

    [2'] R_u(v) = [ v ∙ (u/|u|) ] u , for any vector v .

    But what is the motivation for reserving a special "symbol" for this operation? Its description is "project the vector v into the direction of u and then multiply by the magnitude of u". The meaningful aspects of this operation are much better expressed by writing the corresponding operator as |u| P_{u/|u|}.

    --------------------------

    Now, let's go back to the definition I gave in post #2.


    I am now strongly suggesting that we, instead, use the following as our "official" definition:

    *************************
    * 1a) Given any unit vector e, define the "projector onto e" by:
    *
    * P_e(v) = (v,e) e , for any vector v .
    *
    * Such a projector is said to be "1-dimensional".
    *
    * 1b) An operator P is said to be a "projector" if (and only if)
    * it can be written as a sum of 1-dimensional projectors
    * which project onto mutually orthogonal unit vectors.
    *
    *************************

    This definition [1a) and 1b) taken together] is equivalent to the original one I gave. But I think it makes the meaning of "projector" much clearer.
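
    A numerical illustration of 1a) and 1b) (my own sketch): the sum of 1-dimensional projectors onto mutually orthogonal unit vectors is again Hermitian and idempotent, with eigenvalues 0 and/or 1.

[code]
import numpy as np

# two orthonormal vectors in C^3 (an assumed example)
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 1.0j]) / np.sqrt(2)

P = np.outer(e1, e1.conj()) + np.outer(e2, e2.conj())   # a "2-dimensional" projector

assert np.allclose(P, P.conj().T)                       # Hermitian
assert np.allclose(P @ P, P)                            # idempotent
assert np.allclose(np.linalg.eigvalsh(P), [0, 1, 1])    # eigenvalues 0 and/or 1
[/code]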

    ------------------------------------------------------

    All that I have said above should help clarify the matters raised in your earlier posts.


    -----------
     

    Attached Files:

    Last edited: Aug 8, 2004
  21. Aug 9, 2004 #20
    [Note: you wrote "eigenbasis", when you meant "eigenvectors".]

    The answer to your query is given in that book by Theorem 1.1 (called "The Spectral Theorem"), on p. 38. In simple language, it is saying that the answer is: "Yes, a self-adjoint operator will always have eigenvectors (or "generalized" eigenvectors) spanning the entire Hilbert space."

    HOWEVER, you must NOTE that the definition of "self-adjoint" (in the case of an infinite-dimensional Hilbert space) is nontrivial (... in that book, the appropriate definition is given at the top of p. 36, in 1.1.1 Notations).

    ------------------------

    For the sake of giving you (at least) something, here is some basic information. Let A be a linear operator acting in the Hilbert space.

    Definition: A is "symmetric" iff <g|Af> = <Ag|f> for all f,g Є Domain(A).

    Definition: A is "self-adjoint" iff: (i) A is symmetric; (ii) the "adjoint" At exists; and (iii) Domain(A) = Domain(At).

    Lemma: At, the "adjoint" of A, exists iff Domain(A) is dense in the Hilbert space.

    All that is missing in the above is a definition of "adjoint" (which I have omitted for the sake of brevity and simplicity). That definition would then give us a specification of Domain(At) and thereby complete the definition of "self-adjoint".

    ------------------------

    Now, you might ask: How can it be that there is a linear operator A with a domain "smaller" than the whole Hilbert space, yet, at the same time, A has eigenvectors which span the entire space?

    Well, first of all, this can only happen in an infinite-dimensional Hilbert space. Suppose A has eigenfunctions φ_n(x) with corresponding eigenvalues a_n. So,

    [1] A φ_n(x) = a_n φ_n(x) .

    Since the φ_n(x) span the entire space, an arbitrary element ψ(x) of the space can be written as

    [2] ψ(x) = Σ_n c_n φ_n(x) .

    The right-hand-side of [2] is an infinite sum, and, therefore, involves a limit. While every finite subsum is necessarily in the domain of A, it is possible that in the limit of the infinite sum, the resulting vector is no longer in that domain. ... As you can see, this sort of phenomenon can only occur when the Hilbert space is infinite-dimensional.

    But what do we get if we, nevertheless, attempt to "apply" A to ψ(x) by linearity and use [1]? Let's try it:

    [3] A ψ(x) = Σ_n c_n A φ_n(x) = Σ_n a_n c_n φ_n(x) .

    As you may have guessed, when ψ(x) is not in Domain(A), the following occurs in [3]: while every finite subsum is necessarily an element of the Hilbert space, in the limit of the infinite sum the "result" is no longer in the Hilbert space.
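
    A concrete instance of this (my own illustration, not taken from the book): suppose the eigenvalues are [tex] a_n = n [/tex] and take coefficients [tex] c_n = 1/n [/tex] . Then

    [tex] \sum_n |c_n|^2 = \sum_n \frac{1}{n^2} < \infty , [/tex]

    so [tex] \psi = \sum_n \frac{1}{n}\, \phi_n [/tex] is a perfectly good element of the Hilbert space; but

    [tex] \sum_n |a_n c_n|^2 = \sum_n 1 [/tex]

    diverges, so the "result" in [3] is not square-integrable; that is, ψ lies outside Domain(A).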

    --------
     
  22. Aug 10, 2004 #21
    Eye,

    I wanted to print your response in order to read it more carefully. Unfortunately, I ran into some trouble with my printer. Hopefully I can print it tomorrow.

    One of the issues I saw in your last response is that you seem to extend your definition of self-adjoint to a "non-trivial" one; my guess is you want to extend it to operators whose eigenvectors can span the entire Hilbert space.

    Is it necessary?

    Because if you do so, then [tex] P_{e_n} [/tex] is not self-adjoint. Note it has only [tex] e_n [/tex] as its eigenvector.

    I did overlook that theorem; I shall look at it more closely.

    Anyway, I found these facts that help me see it more clearly
    (below, I was assuming a relaxed definition of self-adjoint):

    1).
    [tex] \sum_n a_n P_{e_n} [/tex]
    can simply be represented as the diagonal matrix [tex] diag( a_1, a_2, a_3, ... ) [/tex] (see the numerical sketch after this list).

    So, it actually spans the group ( or ring ) formed by the matrices with only diagonal elements.

    2). In general, they are not self-adjoint unless all [tex] a_i [/tex] are real.

    3). If all [tex] a_i [/tex] are real, for any [tex] a_i [/tex] not zero, [tex] e_i [/tex] is one of its eigenvectors.

    4). In particular, if
    [tex] \sum_n a_n = 1 [/tex] and [tex] a_n >= 0 [/tex]
    then it's a "state". If more than one [tex] a_n > 0 [/tex] , then it's a mixed state.

    5). This also tells me that a set of { [tex] P_{e_n} [/tex] } is definitely not enough to span ( or generate ) all states, even though the { [tex] e_n [/tex] } can span the Hilbert Space.
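
    Here is the numerical sketch referred to in 1) above (my own, under the relaxed definition): Σ_n a_n |e_n><e_n| is just diag( a_1, a_2, ... ), and with a_n ≥ 0 summing to 1 it has unit trace, i.e. it is a (generally mixed) state.

[code]
import numpy as np

a = np.array([0.5, 0.3, 0.2])            # assumed weights: nonnegative and summing to 1
n = len(a)
basis = np.eye(n)
rho = sum(a[k] * np.outer(basis[:, k], basis[:, k]) for k in range(n))

assert np.allclose(rho, np.diag(a))      # 1) the diagonal matrix diag(a_1, a_2, ...)
assert np.isclose(np.trace(rho), 1.0)    # 4) unit trace: a valid (here mixed) state
[/code]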
     
  23. Aug 10, 2004 #22
    Eye,

    By the way, are you Leon?

    On page 37, the mapping defining the projection-valued measure is not necessarily 1-1 & onto, right?
     
  24. Aug 10, 2004 #23
    Every vector orthogonal to e_n is an eigenvector of P_n with eigenvalue zero. Clearly, P_n has a complete set of eigenvectors.


    1) The only problematic part here is the expression "spans the group (or ring)". True, the said objects form a group with respect to +, and a monoid with respect to ∙ , and that defines a ring. But when you talk about spanning, you are thinking of the group aspect (with the + operation) over the field C. This gives a vector space ... and if you want to acknowledge the monoid aspect with respect to ∙ , then it's called an (associative) algebra. In short, the simplest correct thing to say is:

    So, it actually spans the vector space formed by the matrices ... over C.

    (... if you have a "thing" for such terminologies try mathworld)

    2) True.

    3) This is the same error as the one identified at the beginning of this post ... eigenvalues can be 0 (it's the eigenvectors! which can't).

    4) True.

    5) True, the P_n are not enough. But what is the "this" that tells you?
     
  25. Aug 10, 2004 #24
    Eye,

    Got you. My error was assuming zero cannot be an eigenvalue.
     
  26. Aug 10, 2004 #25
    Eye,

    About (5), what it means to me is that I have no guarantee that I can generate a "position" eigenstate by a linear combination of "energy" eigenstates, even though I can generate its eigenfunction from the "energy" eigenfunctions. Or: a mixed state built from "energy" eigenfunctions won't equal any combination of "position" pure states.

    I followed through the rest up to page 45. The operators P and Q should be unbounded. What exactly does that mean?

    Thanks
     