
QM: Formalism and bases

  1. Feb 22, 2009 #1
    1. The problem statement, all variables and given/known data
    Hi all.

    Let's say that we are looking at an operator Q and two states |1> and |2>. The states |1> and |2> are not eigenstates of Q, but of some other operator H.

    My question: Are <1|Q|1>, <1|Q|2>, <2|Q|1>, and <2|Q|2> independent of the basis used, as long as Q, |1>, and |2> are expressed in the same basis?

    Thanks in advance.

    Sincerely,
    Niles.
     
    Last edited: Feb 22, 2009
  3. Feb 22, 2009 #2

    Redbelly98

    Staff Emeritus
    Science Advisor
    Homework Helper

    Yes.
    Those four terms are the matrix elements of Q with respect to the states |1> and |2>, and they come out the same even when Q, |1>, and |2> are expressed using a different basis.
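    Written out with an arbitrary invertible change-of-basis matrix T (just a sketch of why the claim holds; T is only a label for the transformation, not anything from the problem): if the coordinates of a ket transform as [itex]v \mapsto Tv[/itex], those of a bra as [itex]w \mapsto wT^{-1}[/itex], and the matrix of Q as [itex]Q \mapsto TQT^{-1}[/itex], then

    [tex]
    (w T^{-1})(T Q T^{-1})(T v) = w Q v,
    [/tex]

    so each of the four numbers comes out the same in the new coordinates.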
     
  4. Feb 22, 2009 #3
    Thanks for replying. I tried working through an example of my own to fully understand what is happening here, but I cannot get it to work.

    I am looking at a matrix (i.e. the operator Q) and a vector (i.e. |1>) in the basis E spanned by (1,0) and (0,1). They are given by:

    [tex]
    Q = \begin{pmatrix} 2 & 0 \\ 3 & 1 \end{pmatrix} \quad \text{and} \quad \left| 1 \right\rangle = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
    [/tex]

    These have just been chosen randomly. Now we compute the above:

    [tex]
    \left\langle 1 \right|Q\left| 1 \right\rangle_{\text{in E}} = \begin{pmatrix} 1 & 2 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 3 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = 2.
    [/tex]

    Now, according to the above, this result is supposed to be independent of whatever basis we are using. So I now wish to use the eigenbasis of Q, where the transition matrix from the basis E to the eigenbasis of Q (the inverse of the matrix whose columns are the eigenvectors of Q) is:

    [tex]
    S = \begin{pmatrix} 1/3 & 0 \\ 1 & 1 \end{pmatrix}^{-1}.
    [/tex]

    This gives me:

    [tex]
    \left| 1 \right\rangle_{\text{in eigenbasis of Q}} = \begin{pmatrix} 1/3 & 0 \\ 1 & 1 \end{pmatrix}^{-1} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 3 \\ -1 \end{pmatrix},
    [/tex]

    and thus:

    [tex]
    \left\langle 1 \right|Q\left| 1 \right\rangle_{\text{in eigenbasis of Q}} = \begin{pmatrix} 3 & -1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 3 \\ -1 \end{pmatrix} = 10.
    [/tex]

    But this should equal 2, because it is supposed to be independent of the basis. This is just standard linear algebra and a change of basis, so I cannot see where the error in the example would be. Have I misunderstood something?
     
    Last edited: Feb 22, 2009
  5. Feb 22, 2009 #4

    Redbelly98

    Staff Emeritus
    Science Advisor
    Homework Helper

    I think Q needs to be symmetric in order for its eigenvectors to be orthogonal.
     
  6. Feb 22, 2009 #5
    I have just tried with a symmetric matrix, and it does not match either.
     
  7. Feb 22, 2009 #6
    I can't quite make it sit right with me why this is so, but if you carry out the multiplication in the correct order, it works.

    First of all, your arithmetic is wrong: <1|Q|1> = 12, not 2. Then, if you perform a similarity transformation on Q, i.e. diagonalize it and represent it in its eigenbasis, we get

    <1|[itex]VDV^{-1}[/itex]|1>, but (<1|V)[itex]^{T}[/itex] != [itex]V^{-1}[/itex]|1>. That really is weird to me, because you're right: since writing down [itex]V^{-1}[/itex]|1> is just representing |1> in another basis, it seems like the left term should simply be its transpose.
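    As a quick numerical check of that last claim (a minimal numpy sketch using the Q and |1> chosen in post #3; the variable names are my own):

[code]
import numpy as np

Q = np.array([[2.0, 0.0],
              [3.0, 1.0]])      # the operator from post #3
ket1 = np.array([1.0, 2.0])     # |1> in the basis E

# <1|Q|1> in the original basis really is 12, not 2
print(ket1 @ Q @ ket1)          # 12.0

# V has the (unnormalized) eigenvectors of Q as its columns
V = np.array([[1.0/3.0, 0.0],
              [1.0,     1.0]])

print(np.linalg.inv(V) @ ket1)  # [ 3. -1.]      the transformed ket V^{-1}|1>
print(ket1 @ V)                 # [ 2.333  2.]   <1|V, which is clearly not its transpose
[/code]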
     
    Last edited: Feb 22, 2009
  8. Feb 22, 2009 #7

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    The problem is that you transformed the coordinate representation of [itex]\langle 1 |[/itex] incorrectly. If the 'change of basis' transformation for coordinate vectors representing a ket is given by

    [tex]v \mapsto T v[/tex]

    then the corresponding change-of-basis transformation for a bra is

    [tex]w \mapsto w T^{-1}[/tex]


    Only in the case of a unitary transformation ([itex]T^{-1} = T^*[/itex]) can you compute the new coordinates on a bra by simply taking the conjugate transpose of the new coordinates for the ket.



    To put it differently, the coordinate representation of [itex]\langle \psi |[/itex] is not the conjugate transpose of the coordinate representation of [itex]| \psi \rangle[/itex], except in the special case where you are expressing everything in terms of an orthonormal basis.
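    A concrete check of this rule against the numbers from post #3 (a minimal numpy sketch; T is the matrix S = V^{-1} from that post, and 12 is the corrected value of <1|Q|1> from post #6):

[code]
import numpy as np

Q = np.array([[2.0, 0.0],
              [3.0, 1.0]])
ket1 = np.array([1.0, 2.0])

V = np.array([[1.0/3.0, 0.0],       # columns = eigenvectors of Q
              [1.0,     1.0]])
T = np.linalg.inv(V)                # change of basis: E -> eigenbasis of Q

D = T @ Q @ V                       # Q in its own eigenbasis: diag(2, 1)
ket_new = T @ ket1                  # ket coordinates transform as v -> T v
bra_new = ket1 @ np.linalg.inv(T)   # bra coordinates transform as w -> w T^{-1}

print(ket1 @ Q @ ket1)              # 12.0  (original basis)
print(bra_new @ D @ ket_new)        # 12.0  (eigenbasis, same number)
print(ket_new @ D @ ket_new)        # 19.0  (naively transposing the new ket: wrong)
[/code]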
     
  9. Feb 22, 2009 #8
    Yeah, that's what I discovered. To be a little clearer:

    if V is an eigenbasis that spans the space and x is the arbitrary vector that we wish to represent in this eigenbasis, then we have to solve

    Vx = b, with b being the representation of x in the eigenbasis; hence [itex]x=V^{-1}b[/itex], since V spans the space and is therefore invertible. But then the bra is [itex] x^T=(V^{-1}b)^T=b^T(V^{-1})^T[/itex].

    I think this is where the fact that the transpose is actually a mapping to the dual space, and not just juggling the symbolic representation of the kets, becomes relevant. My professor in linear algebra had no idea why I would want to think of transposition as a mapping to the dual space instead of just "flipping" the vector.
     
  10. Feb 22, 2009 #9
    Ahh, yes. I forgot to normalize the eigenvectors of Q and the ket |1>, since we are always supposed to be working in an orthonormal basis.

    Thanks to all three of you for helping me.
     
  11. Feb 22, 2009 #10
    No no no, they have to be orthogonal and normalized. I say orthogonal first because you can always normalize an eigenbasis, but the eigenbasis is not always orthogonal. In fact, the eigenbasis here is not orthogonal (the eigenvectors (1/3, 1) and (0, 1) have a nonzero dot product), so simply "flipping" the eigenvectors will never work for this matrix. This matrix is not unitary/Hermitian/symmetric, etc.
     
  12. Feb 22, 2009 #11
    I am a little unsure of how you do this. We have to find the transformed ket and the transformed bra (i.e. the ket and the bra in the new basis). The transformed ket is straightforward, so that is OK.

    Now the transformed bra is a little tricky, as you write here. We have (I am leaving out complex conjugation):

    [tex]
    x^T = (V^{-1}b)^T = b^T(V^{-1})^T.
    [/tex]

    I cannot see why I cannot just transpose x in the first place, regardless of the basis we are using?
     
  13. Feb 22, 2009 #12
    I need Hurkyl to confirm this, but transposition isn't simply turning the vector 90 degrees. It's a map from a vector space to a second vector space, called the dual space of the first.
     
  14. Feb 22, 2009 #13

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    Transpose sort of has a double meaning. (Actually, more, but I won't get into that....)

    The big idea is that when you have an inner product on vectors, then for any vector v you can define a linear function

    [tex]f_v(w) := \langle v, w \rangle[/tex]

    This makes [itex]f_v[/itex] a linear functional. In bra-ket notation, v is a ket, and [itex]f_v[/itex] is a bra, and you just write [itex]f_v[/itex] as [itex]\langle v \mid[/itex].


    The other idea is the simple one ice109 described as "rotating the vector 90 degrees". This is certainly a transpose operation too. But, a priori, there's no reason it should be connected with the one I described above.


    But they are connected; the conjugate transpose (second meaning) is exactly the same as transposing (first meaning) with respect to the standard inner product on n-tuples.

    When you are representing everything with respect to an orthonormal basis, the inner product for your vector space becomes the standard inner product on n-tuples. And so, in this special case, you can compute the coordinate vector of a bra by applying the conjugate transpose to the coordinate vector of the corresponding ket.
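    To see the orthonormal special case in numbers (a made-up symmetric example, not the matrix from post #3): for a symmetric Q the eigenvectors can be chosen orthonormal, the change-of-basis matrix is then orthogonal, and the plain transpose of the transformed ket really is the transformed bra.

[code]
import numpy as np

Q = np.array([[2.0, 1.0],       # symmetric, so its eigenvectors are orthogonal
              [1.0, 2.0]])
x = np.array([1.0, 2.0])

evals, V = np.linalg.eigh(Q)    # eigh returns orthonormal eigenvectors as columns
D = np.diag(evals)              # Q in its eigenbasis
x_new = V.T @ x                 # V is orthogonal, so V^{-1} = V^T

print(x @ Q @ x)                # 14.0  (original basis)
print(x_new @ D @ x_new)        # 14.0  (transposing the transformed ket works here)
[/code]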
     
  15. Feb 22, 2009 #14
    So how does this break down for our problem? Is it because, in taking the product [itex]x^T Q x[/itex], we are evaluating the inner product of x with itself with respect to a nonstandard inner product, a nonstandard metric, namely Q?

    I realize I've jumbled it a bit now that I reread the last paragraph, but if you could straighten out that first statement it would probably be instructive for me.

    If you don't represent everything with respect to an orthonormal basis, then what is the inner product?
     
  16. Feb 22, 2009 #15

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    Um... let's see.

    In the old coordinates, the inner product is given by

    [tex]\langle u, v\rangle = u^* v[/tex]

    If T is the change-of-basis matrix, then in the new coordinates, we must have

    [tex]\langle \mu, \nu \rangle = \langle T^{-1} \mu, T^{-1} \nu \rangle
    = \mu^* (T^*)^{-1} T^{-1} \nu = \mu^* (T T^*)^{-1} \nu[/tex]

    and the ket-to-bra transformation has coordinate representation

    [tex] \nu \mapsto \nu^* (T T^*)^{-1} [/tex]

    (Hopefully it's obvious, but I'm using u,v to denote coordinate vectors relative to the old basis, and [itex]\mu, \nu[/itex] to denote coordinate vectors relative to the new basis)
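    Plugging the numbers from post #3 into this as a sanity check (a minimal numpy sketch; T here is the matrix S = V^{-1} from that post, and everything is real, so the conjugate transpose is just the transpose):

[code]
import numpy as np

x = np.array([1.0, 2.0])            # |1> in the old basis E
V = np.array([[1.0/3.0, 0.0],
              [1.0,     1.0]])
T = np.linalg.inv(V)                # change of basis, v -> T v

nu = T @ x                          # coordinates of the ket in the new basis
G = np.linalg.inv(T @ T.T)          # (T T*)^{-1} for a real T

print(x @ x)                        # 5.0  (old-coordinate inner product <x, x>)
print(nu @ G @ nu)                  # 5.0  (same inner product in the new coordinates)
[/code]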
     
    Last edited: Feb 23, 2009
  17. Feb 23, 2009 #16
    I must admit I am a little confused now: you are multiplying a matrix by a row vector from the left?

    But I think I got my original question answered: the four terms <1|Q|1>, <1|Q|2>, <2|Q|1> and <2|Q|2> give the same result no matter what basis we are in, so the basis does not necessarily have to be orthogonal. It also does not matter whether Q is Hermitian or not. The only thing I have to be cautious about is that if Q is not Hermitian, then the corresponding transformed bra is not the transpose of the transformed ket, but has to be found using the method described above; correct?
     
    Last edited: Feb 23, 2009
  18. Feb 23, 2009 #17
    Ok, of course it is a bra, and not a ket.

    Generally we have that:

    [tex]
    x^T = (Vb)^T = b^T V^T.
    [/tex]

    I am using the matrix from the above example, and to find the corresponding bra, I use the above expression. But this still gives me the wrong answer. Then I tried to use:

    [tex]
    w \mapsto w T^{-1},
    [/tex]

    and this worked, but it isn't supposed to, because you told me that this was only for the case when the matrix A is Hermitian, which it is not here.
     
    Last edited: Feb 23, 2009
  19. Feb 23, 2009 #18
    You've misunderstood somewhere: the first case only works if A is Hermitian; the second case is the general one.
     
  20. Feb 24, 2009 #19
    Great, then it is not strange that it works. But how does one show the general case, i.e. how does one show the following?

    [tex]
    w \mapsto w T^{-1}
    [/tex]
     
  21. Feb 24, 2009 #20

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    The easiest way is to use the fact that wv is a scalar, and thus invariant under coordinate changes.
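    Spelled out (just a one-line version of that argument, with T again the change-of-basis matrix for kets): if [itex]v \mapsto Tv[/itex] and the scalar wv must come out unchanged for every v, then the new bra coordinates [itex]w'[/itex] have to satisfy

    [tex]
    w' (T v) = w v \;\;\text{for all } v \quad\Longrightarrow\quad w' T = w \quad\Longrightarrow\quad w' = w T^{-1}.
    [/tex]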
     