What exactly is the Sigma-ket-bra?

Thread starter: TimH
I'm teaching myself QM using Zettili's book. I've been chugging along, seeing how linear algebra works in infinite-dimensional Hilbert space. Now the author has introduced the "completeness relation": for an infinite set of orthonormal kets |Xn> forming a basis, Sigma(n=1 to infinity) of |Xn><Xn| = I (the unit operator).

Now this notation for the unit operator keeps coming up in proofs, e.g. in showing how to change bases. I can see that in a matrix representation, with |Xn> a column vector of coordinates and <Xn| a row vector of conjugates of coordinates, |Xn><Xn| gives a matrix, i.e. is a linear operator.

I don't have any sort of physical intuition, though, as to what this sigma expression means. I know the bra-ket is a scalar product and can be thought of as the projection of the ket onto the bra. So the sigma-ket-bra is an infinite sum of linear operators (matrices). Is there anything more that can be said about it? Could somebody make this sigma-ket-bra expression a little less opaque? Thanks.
 
No, I can't help you. My linear algebra and quantum mechanics professors were abysmal, and I didn't take what I could have from the course. But what I can do is tell you that I like the use of "opaque".
 
I know that if we take a real (i.e. non-complex) set of basis vectors, e.g. (1,0,0), (0,1,0), and (0,0,1) in the 3-dimensional case, and treat the column vectors of these as kets and the row vectors of these as bras (since the conjugate is the same value), then we get three 3x3 matrices that are all zero except for one element in each. And then when we do the sigma and add them, we get the identity matrix. So I see how you can do this with infinitely-long kets and bras, and still get the identity matrix.

I was just hoping there was something more intuitive to say about it.

So it would be...less opaque...:)
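A minimal sketch of the 3-D computation described above, in plain Python: build each |e_i><e_i| as an outer product and sum the three matrices to recover the identity.

```python
def outer(ket, bra):
    """Outer product |ket><bra| as a matrix (list of rows)."""
    return [[k * b for b in bra] for k in ket]

def mat_add(a, b):
    """Element-wise sum of two matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# Real orthonormal basis, so each bra is just the ket transposed.
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

total = [[0, 0, 0] for _ in range(3)]
for e in basis:
    total = mat_add(total, outer(e, e))  # add |e><e| for each basis vector

print(total)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Each outer(e, e) is zero except for a single 1 on the diagonal, and the three of them stack up to the 3x3 identity, exactly as described.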
 
Hi Tim,

Check out the description in the first chapter of Sakurai's "Modern Quantum Mechanics". He gives a very nice description of this. It is quite short and sweet, so you may need to ponder it a bit and expand some of the derivations on your own.

Essentially, if |i> and |j> are n-dimensional basis vectors, the "ket-bra" |j><i| describes an n x n matrix (operator) with a 1 in the i-th column and j-th row, and zeroes everywhere else. So the "resolution of the identity" just says that you can generate the identity matrix by summing all of the ket-bras that index diagonal elements of the matrix, i.e.:

Sigma(i=1 to n) of |i><i| = I

where I is the n x n identity matrix (as you have probably seen).

Does this help?

EDIT: Ok ... I didn't see your 2nd post before I wrote this, I guess you got this part. Do you get the part about how this can be used to transform between two independent bases? That is, the diagonal ket-bras act individually as projection operators to project the ith component (in the new basis) of the vector in the original basis (the math is clearer than I can make the language).
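The projection idea in the edit above can be sketched numerically. This is a hypothetical 2-D illustration (the rotated basis and vector are made up for the example): each |i><i| built from a new orthonormal basis picks out one component of the vector, and summing the projected pieces reassembles the original vector.

```python
import math

def dot(u, v):
    """Real scalar product <u|v>."""
    return sum(a * b for a, b in zip(u, v))

# New orthonormal basis: the standard basis rotated by 45 degrees.
s, c = math.sin(math.pi / 4), math.cos(math.pi / 4)
new_basis = [(c, s), (-s, c)]

psi = (3.0, 4.0)

# |i><i| psi = <i|psi> |i>  -- project psi onto each new basis vector.
pieces = [tuple(dot(e, psi) * comp for comp in e) for e in new_basis]

# Summing the projections reassembles psi exactly.
reassembled = tuple(sum(p[k] for p in pieces) for k in range(2))
print(reassembled)  # ~ (3.0, 4.0)
```

The numbers dot(e, psi) are precisely the components of psi in the new basis, which is how the completeness relation drives basis conversion.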
 
SpectraCat said:
That is, the diagonal ket-bras act individually as projection operators to project the ith component (in the new basis) of the vector in the original basis (the math is clearer than I can make the language).

Thanks for the post, SpectraCat. This description is the kind of thing I was looking for and definitely helps. I'll take the basis-conversion formula in my book and try some conversions in real 2- and 3-D space between simple bases to see what you're describing.

It's like... I know QM math is "abstract" because it's in Hilbert space, but if there is a Euclidean analog for any of the structure I want to at least have a firm handle on that.
 
One way to think about it is to check that it really is the identity operator.
Given a basis, every vector in the vector space has a unique expansion:
|psi> = Sigma(n) of c_n |x_n>
Now act on this with the operator Sigma(n) of |x_n><x_n| and see what you get.
When the basis is orthonormal, the operator is the identity.
If you think hard about why it works, and why the basis has to be orthonormal, then you might develop a better mental picture.
(You can of course do this in 2 or 3d explicitly if you want.)

edit: and remember to have a different dummy index 'n' for the two summations!
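The suggested exercise can be sketched in a few lines of Python: apply Sigma(m) of |x_m><x_m| to a vector term by term. With an orthonormal basis, <x_m|x_n> is a Kronecker delta, so every coefficient survives and psi comes back; a non-orthonormal set (the skewed pair below is a made-up example) does not give the identity.

```python
def dot(u, v):
    """Real scalar product <u|v>."""
    return sum(a * b for a, b in zip(u, v))

def apply_completeness(basis, psi):
    """Return (Sigma(m) of |x_m><x_m|) acting on psi."""
    out = [0.0] * len(psi)
    for x in basis:
        coeff = dot(x, psi)  # <x_m|psi>
        out = [o + coeff * comp for o, comp in zip(out, x)]
    return out

ortho = [(1.0, 0.0), (0.0, 1.0)]   # orthonormal
skew = [(1.0, 0.0), (1.0, 1.0)]    # spans the plane, but NOT orthonormal

psi = [2.0, 5.0]
print(apply_completeness(ortho, psi))  # [2.0, 5.0] -- the identity
print(apply_completeness(skew, psi))   # [9.0, 7.0] -- not the identity
```

The failure in the skewed case shows why orthonormality is essential: without it, the cross terms <x_m|x_n> for m != n contaminate the coefficients.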
 
Read this post if you want to understand bra-ket notation. It can't be explained in intuitive "physical" terms, because it isn't physics, it's just math.

"Linear algebra in infinite dimensions" isn't called "linear algebra". It's called "functional analysis". (That terminology doesn't make a whole lot of sense, but that's what it's called).
 