
Looking for a basis for C^2

  1. Jan 29, 2010 #1
    I'm teaching myself quantum mechanics and am learning about bra-ket notation. There is a particular operator used, the ket-bra (e.g. |X><X|). To understand it, I'm trying to come up with an orthonormal basis for C^2 as a simple case (i.e., the 2-dimensional vector space over the field of complex numbers). That is, I want two vectors, each with two components, each component a complex number, that span C^2 and are orthonormal. I've tried some combinations like (1,0) and (0,i), but no luck. Right now my TI-89 is chugging away looking at a few thousand possible vector combinations, but there has to be a better way.

    Can anybody suggest how I would find two such vectors? Thanks.
  3. Jan 29, 2010 #2

    Physics Monkey


    Hi TimH,

    I'm a bit confused by your question. It seems to me that you've already found such a basis. Why do you think [tex] (1,0) [/tex] and [tex] (0,i)[/tex] don't work?
  4. Jan 29, 2010 #3



    (1,0) and (0,1) work too: (z,w)=z(1,0)+w(0,1). (You must have misunderstood some definition.)

    See this post for more about bra-ket notation.

    If [itex]|X\rangle[/itex] is a member of [itex]\mathbb C^2[/itex], then [itex]|X\rangle\langle X|[/itex] is a linear operator on [itex]\mathbb C^2[/itex]. To be more precise, it's the projection operator for the one-dimensional subspace spanned by [itex]|X\rangle[/itex]. If you write the vectors as [tex]|V\rangle=\begin{pmatrix}V_1\\ V_2\end{pmatrix}[/tex], then you can write [tex]|X\rangle\langle X|=\begin{pmatrix}X_1\\ X_2\end{pmatrix}\begin{pmatrix}X_1^* & X_2^*\end{pmatrix}[/tex], where the bra is the conjugate transpose of the ket.
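    A quick numeric sketch of this (my own example, not from the thread): building |X><X| as an outer product, with the bra taken as the conjugate transpose, and applying the resulting projector to an arbitrary vector.

    ```python
    # Sketch: |X><X| as an outer product in C^2.
    # The bra <X| is the conjugate transpose of the ket |X>,
    # so entry (i, j) of the matrix is X_i * conj(X_j).

    def ket_bra(X):
        """Return the 2x2 matrix |X><X| for a ket X in C^2."""
        return [[X[i] * X[j].conjugate() for j in range(2)] for i in range(2)]

    X = [1 + 0j, 0j]          # the basis ket (1, 0)
    P = ket_bra(X)            # projector onto span{(1, 0)}

    # Applying P to any vector keeps only the component along X:
    v = [3 + 2j, 1 - 1j]
    Pv = [sum(P[i][j] * v[j] for j in range(2)) for i in range(2)]
    # Pv is (3+2j, 0): the second component has been projected away.
    ```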

    But I'm not a big fan of the "kets are column vectors, bras are row vectors" approach to bra-ket notation. It will give you the right intuition about bras and kets, but it doesn't explain why the notation still works when the vector space is infinite-dimensional. (See the post I linked to instead).
    Last edited: Jan 29, 2010
  5. Jan 29, 2010 #4
    Okay, I figured out what happened. Thank you for your posts. I first tried (1,0) and (0,i) on my TI-89 calculator, using the "completeness relation," i.e. that the ket-bra |x><x| of the two vectors, when added together, should give the identity matrix. It didn't work on the calculator which is why I didn't think this pair worked. Then I did it by hand and it worked. Then I poked around on the calculator and discovered that when you take the transpose of a complex matrix or vector, it gives you the adjoint, which screwed up my formula. A feature, I guess...
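    The completeness-relation check described above can be done in a few lines (a sketch of my own, not the poster's calculator steps), using the conjugate transpose for the bra rather than the plain transpose:

    ```python
    # Check the completeness relation sum_k |e_k><e_k| = I
    # for the basis (1, 0), (0, i). The bra must be the conjugate
    # transpose; a plain transpose gives the wrong answer here.

    def outer(X):
        """|X><X| as a 2x2 matrix: entry (i, j) is X_i * conj(X_j)."""
        return [[X[i] * X[j].conjugate() for j in range(2)] for i in range(2)]

    e1, e2 = [1 + 0j, 0j], [0j, 1j]
    P1, P2 = outer(e1), outer(e2)
    S = [[P1[i][j] + P2[i][j] for j in range(2)] for i in range(2)]
    # S is the 2x2 identity: the (1,1) entry of P2 is i * conj(i) = 1.
    ```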

    So thank you for persisting and getting me to do it by hand...It would still be nice to find a set of orthonormal vectors in C^2 which don't have any zero-coefficients, i.e. where each component is a full-blown complex number with real and imaginary parts. Are there any well-known examples? I couldn't find anything online.

    This is part of my effort to understand the machinery of Hilbert space, even if the space itself isn't visualizable. Thanks.
    Last edited: Jan 29, 2010
  6. Jan 29, 2010 #5
    Just pick any two complex numbers a & b.

    You can construct an orthogonal basis that consists of two vectors, (a,b) and (b*,-a*).

    You can then normalize it by rescaling those vectors.
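    A small numeric check of this construction (my sketch, with arbitrarily chosen a and b): (a,b) and (b*,-a*) are orthogonal under the Hermitian inner product, and both have norm sqrt(|a|^2 + |b|^2), so dividing by that rescales them to an orthonormal pair.

    ```python
    # Verify that (a, b) and (b*, -a*) are orthogonal for arbitrary
    # complex a, b, then normalize both vectors.
    import math

    a, b = 1 + 2j, 3 - 1j          # any complex numbers, not both zero
    u = [a, b]
    v = [b.conjugate(), -a.conjugate()]

    # Hermitian inner product <u|v> = sum_i conj(u_i) * v_i
    ip = sum(x.conjugate() * y for x, y in zip(u, v))
    # ip = conj(a)*conj(b) - conj(b)*conj(a) = 0: orthogonal.

    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    u_hat = [x / norm for x in u]
    v_hat = [x / norm for x in v]
    ```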
  7. Jan 29, 2010 #6
    Thanks Hamster. I figured there was some general form. I'll play around with that.
  8. Jan 29, 2010 #7



    If you're having trouble creating an orthonormal basis, then why not:
    • Use an orthonormalization algorithm? (e.g. Gram-Schmidt)
    • Use an inner-product-preserving transformation to alter a known orthonormal basis?
    • Write down -- and solve -- a system of equations that expresses exactly what you want?
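    The first suggestion can be sketched in a few lines (my own example): Gram-Schmidt on two linearly independent vectors in C^2, using the Hermitian inner product.

    ```python
    # Gram-Schmidt in C^2: orthonormalize two linearly independent
    # vectors under the Hermitian inner product <u|v> = sum conj(u_i)*v_i.
    import math

    def inner(u, v):
        return sum(x.conjugate() * y for x, y in zip(u, v))

    def normalize(v):
        n = math.sqrt(inner(v, v).real)
        return [x / n for x in v]

    def gram_schmidt(v1, v2):
        e1 = normalize(v1)
        # Subtract from v2 its projection onto e1, then normalize.
        c = inner(e1, v2)
        w = [y - c * x for x, y in zip(e1, v2)]
        return e1, normalize(w)

    # Two linearly independent vectors with fully complex components:
    e1, e2 = gram_schmidt([1 + 1j, 1j], [1 + 0j, 2 - 1j])
    ```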
  9. Feb 1, 2010 #8



    Well, I should have thought of this before, but the general solution to the two-state problem is:

    [tex]\left|\alpha\right\rangle=\begin{pmatrix}\cos\theta \\ \sin\theta\, e^{i\phi}\end{pmatrix},\qquad\left|\beta\right\rangle=\begin{pmatrix}-\sin\theta \\ \cos\theta\, e^{i\phi}\end{pmatrix}[/tex]

    So [tex]\left|\alpha\right\rangle[/tex] and [tex]\left|\beta\right\rangle[/tex] are orthonormal for all choices of [tex]\theta[/tex] and [tex]\phi[/tex].

    As I understand it, this works because in the 2-D case, all orthogonal bases can be obtained by rotation of some initial set of diagonal eigenvectors (the ones Fredrik gave in post #3) in the (complex) 2-D Hilbert space. (This explanation may not be strictly correct ... I am still learning the ins and outs of Hilbert spaces).
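    A quick numeric check of the parametrized basis above (my sketch, with arbitrarily chosen angles): for any theta and phi, the two vectors are orthogonal and each has unit norm.

    ```python
    # Check orthonormality of
    #   alpha = (cos t, sin t * e^{i p}),  beta = (-sin t, cos t * e^{i p})
    # for arbitrary angles t, p.
    import cmath, math

    t, p = 0.7, 1.3                       # any angles work
    phase = cmath.exp(1j * p)
    alpha = [math.cos(t), math.sin(t) * phase]
    beta = [-math.sin(t), math.cos(t) * phase]

    # Hermitian inner product: the phases cancel, leaving
    # -cos(t)sin(t) + sin(t)cos(t) = 0.
    ip = sum(complex(x).conjugate() * y for x, y in zip(alpha, beta))
    ```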