jeebs said:
So, you do that and presumably every time you do that integral correctly, you end up with some number that slots into the matrix in its particular row and column. My next question is, why does this work? Why did whoever first thought of doing that integral decide that they could get the resulting numbers that come out and put them in a matrix? Is this a complicated thing that I don't really need to know as an undergraduate?
No, it's not really anything complicated. Basically, any time you're working with things that can be formed out of linear combinations of other things, you can take the coefficients of the linear combination and put them in a vector. And then you can create matrices to represent linear transformations on the vectors. (This is why it's called "linear algebra" - it's algebra applied to linear combinations.) So once the idea of linear combinations of quantum states first came up, it wasn't much of a stretch from there to using vectors and matrices.
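If it helps to see that idea with actual numbers, here's a minimal sketch (Python/NumPy, with a made-up two-state system; none of these numbers come from a real problem) of coefficients going into a vector and a linear transformation becoming a matrix:

```python
import numpy as np

# Hypothetical 2-state example: |psi> = c0*|phi_0> + c1*|phi_1>.
# The coefficients of the linear combination go into a column vector.
c = np.array([0.6, 0.8j])              # c0, c1 can be any complex numbers

# A linear transformation on such combinations becomes a 2x2 matrix.
O = np.array([[1.0, 2.0],
              [2.0, 3.0]])             # a made-up Hermitian operator in this basis

# Acting with the operator on the state is just matrix-vector multiplication:
# the result is the coefficient vector of O|psi> in the same basis.
print(O @ c)
```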
Incidentally, I believe it was Heisenberg who first formulated quantum mechanics in terms of matrices. He had to do some thinking to figure out how to work with infinite-dimensional matrices, but other than that I suppose it was reasonably straightforward.
I don't think this is the way Heisenberg originally did it, but it's easy to justify this use of vectors and matrices in QM. First you need to know that the basis states are orthonormal, \langle\phi_m\vert\phi_n\rangle = \delta_{mn}, and that the basis is complete:
\sum_{\text{all states}}\lvert\phi_n\rangle\langle\phi_n\rvert = 1
The completeness property may look a tad weird, by the way, if you're not familiar with using states in the form \lvert a\rangle\langle b\rvert. All it's saying is that if you take that sum and sandwich it between
any two states, you get the same result as if you put the identity operator in the middle:
\langle\psi_a\rvert\biggl(\sum\lvert\phi_n\rangle\langle\phi_n\rvert\biggr)\lvert\psi_b\rangle = \langle\psi_a\rvert\mathbf{1}\lvert\psi_b\rangle = \langle\psi_a\vert\psi_b\rangle
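If you want a numerical sanity check of that, here's a small sketch (NumPy, with an arbitrarily chosen orthonormal basis of a 4-dimensional space) showing that summing \lvert\phi_n\rangle\langle\phi_n\rvert over a complete orthonormal basis really does give the identity:

```python
import numpy as np

# Take an arbitrary orthonormal basis of C^4: the columns of a unitary matrix
# (obtained here by QR-factorizing a random complex matrix).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# Sum the outer products |phi_n><phi_n| over all basis states...
P = sum(np.outer(Q[:, n], Q[:, n].conj()) for n in range(4))

# ...and the result is the identity matrix, i.e. the completeness relation.
print(np.allclose(P, np.eye(4)))       # True
```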
Anyway, given those two properties, all you have to do is look at what happens when you multiply out the matrix and vector representations of an operator and a state respectively.
\begin{pmatrix}\langle 0\lvert\hat{O}\rvert 0\rangle & \langle 0\lvert\hat{O}\rvert 1\rangle & \cdots \\
\langle 1\lvert\hat{O}\rvert 0\rangle & \langle 1\lvert\hat{O}\rvert 1\rangle & \cdots \\
\vdots & \vdots & \ddots\end{pmatrix}
\begin{pmatrix}\langle 0\vert \psi\rangle \\
\langle 1\vert \psi\rangle \\
\vdots\end{pmatrix} =
\begin{pmatrix}\langle 0\lvert\hat{O}\rvert 0\rangle\langle 0\vert \psi\rangle + \langle 0\lvert\hat{O}\rvert 1\rangle\langle 1\vert \psi\rangle + \cdots\\
\langle 1\lvert\hat{O}\rvert 0\rangle\langle 0\vert \psi\rangle + \langle 1\lvert\hat{O}\rvert 1\rangle\langle 1\vert \psi\rangle + \cdots\\
\vdots\end{pmatrix}
You'll notice that each entry in the resulting column vector takes the form
\langle m\rvert\hat{O}\biggl(\sum_n\lvert n\rangle\langle n\rvert\biggr)\lvert\psi\rangle
(where m is the row number), and using the completeness property, this is equivalent to
\langle m\rvert\hat{O}\lvert\psi\rangle
But remember, that's just one row of the vector. As you'd expect, it's the same thing you get if you consider \hat{O}\lvert\psi\rangle to be a state and compute the "vector elements" of the state.
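If it helps, here's a quick numerical check of exactly that (a NumPy sketch with an arbitrary made-up operator and state, nothing physical): the matrix of \langle m\rvert\hat{O}\lvert n\rangle times the vector of \langle n\vert\psi\rangle reproduces the components \langle m\rvert\hat{O}\lvert\psi\rangle.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

# An orthonormal basis |0>, |1>, ... as the columns of a unitary matrix.
basis, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

# An arbitrary operator and state, written in the underlying representation.
O_hat = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)

O_mn = basis.conj().T @ O_hat @ basis   # matrix of elements <m|O|n>
psi_n = basis.conj().T @ psi            # column vector of elements <n|psi>

# Matrix times vector vs. the components <m|O|psi> computed directly.
print(np.allclose(O_mn @ psi_n, basis.conj().T @ (O_hat @ psi)))   # True
```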
You can go on to compute an expectation value
\langle\psi\rvert\hat{O}\lvert\psi\rangle
from the above result:
\begin{pmatrix}\langle\psi\vert 0\rangle & \langle\psi\vert 1\rangle & \cdots\end{pmatrix}
\begin{pmatrix}\langle 0\rvert\hat{O}\lvert\psi\rangle \\
\langle 1\rvert\hat{O}\lvert\psi\rangle \\
\vdots\end{pmatrix}
= \langle\psi\vert 0\rangle\langle 0\rvert\hat{O}\lvert\psi\rangle + \langle\psi\vert 1\rangle\langle 1\rvert\hat{O}\lvert\psi\rangle + \cdots
Again, you can use the completeness property to simplify that sum from
\sum_m\langle\psi\vert m\rangle\langle m\rvert\hat{O}\lvert\psi\rangle
to
\langle\psi\rvert\hat{O}\lvert\psi\rangle
Basically, you can see that if you take the matrix/vector representations of states and operators and do matrix multiplication with them, you get the same results as if you just write them out as bras and kets. So everything's consistent.
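And the same kind of numerical check works for the expectation value (again just a sketch with made-up numbers, not a specific physical system):

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4
basis, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

O_hat = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)

O_mn = basis.conj().T @ O_hat @ basis   # matrix of <m|O|n>
psi_n = basis.conj().T @ psi            # vector of <n|psi>

# Row vector of <psi|m> times the column vector of <m|O|psi> (the matrix form)...
matrix_form = psi_n.conj() @ (O_mn @ psi_n)
# ...versus <psi|O|psi> computed directly. They agree.
direct = psi.conj() @ (O_hat @ psi)
print(np.allclose(matrix_form, direct))   # True
```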
jeebs said:
Also, I don't think I have come across any matrices that have non-integer matrix elements in the problems I have done (only things like the angular momentum operator matrices). Does this mean that I should always expect that computing that integral will give me an integer number, in anything I come across in QM?
No, in general the matrix elements can be any
complex numbers. So fractions, decimals, irrationals, imaginary numbers, etc. are all fair game. Usually for teaching they pick "simple" numbers (such as integers) to keep the calculations from getting out of hand, though.
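Just to make the "do the integral, get a matrix element" idea concrete, here's a sketch (Python with SciPy; I'm using particle-in-a-box wavefunctions and the position operator purely as an illustrative example) where the integrals come out as non-integer numbers:

```python
import numpy as np
from scipy.integrate import quad

L = 1.0                                   # box width (arbitrary units)

def phi(n, x):
    # Particle-in-a-box basis function: phi_n(x) = sqrt(2/L) sin(n pi x / L).
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def x_matrix_element(m, n):
    # <phi_m| x |phi_n> = integral over the box of phi_m(x) * x * phi_n(x) dx.
    value, _ = quad(lambda x: phi(m, x) * x * phi(n, x), 0.0, L)
    return value

for m in (1, 2):
    for n in (1, 2):
        print(f"<{m}|x|{n}> = {x_matrix_element(m, n):.6f}")
# Roughly: <1|x|1> = 0.5, <1|x|2> = <2|x|1> = -0.180126, <2|x|2> = 0.5
```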
jeebs said:
The other thing I need to get straight in my head is about basis states that you mentioned. As I understand it, if you've got a wavefunction for a particle |\psi> before any measurement is made, you can write it as the superposition \sum c_i |\phi _i>. Then, when you make your measurement of some quantity, you find the particle to be in a specific state |\phi _i>, the probability of which is given by the square of the constant c_i - and, if I'm not mistaken, the basis is what we call the collection of individual |\phi_i>'s, right?
That's right, the basis is the collection of \lvert\phi_i\rangle's. It's just like describing an object's coordinates in 3D space, where you have the basis vectors \hat{x},\hat{y},\hat{z}.
Now, when you make a measurement of some particular operator/quantity for some particle, you will find the particle to be in one of the eigenstates of
that operator. If you were using the eigenstates of that operator as your basis all along, then yes, the state will be one of the \lvert\phi_i\rangle's. But if you were using a different basis, you'll find that the particle is not in one of your basis states; it'll still be a linear combination of them. But the particular linear combination you find will be an eigenstate of the operator corresponding to the quantity you measured.
For example, it's common to use energy eigenstates as a basis. If you measure, say, angular momentum, you will find that the particle's state after the measurement is an eigenstate of angular momentum. But that won't necessarily be one of your basis states.
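Here's a little numerical illustration of that (NumPy, with a made-up Hermitian matrix standing in for whatever operator you measure, written in the "energy" basis): the operator's eigenvectors are the possible post-measurement states, and in general they're linear combinations of the basis states rather than basis states themselves.

```python
import numpy as np

# A made-up Hermitian operator, written as a matrix in some "energy" basis
# |phi_0>, |phi_1>, |phi_2> (i.e. these entries are the <phi_m|A|phi_n>).
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.3, 3.0]])

# Its eigenvectors are the possible post-measurement states for this observable.
eigenvalues, eigenvectors = np.linalg.eigh(A)

# Each column is one eigenstate, expressed as a linear combination of the
# basis states. In general it has several nonzero components, so it is not
# itself one of the |phi_i>.
print(eigenvalues)
print(eigenvectors)
```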
jeebs said:
But, we can work out what every measurement possibility is from eigenvalue equations?
So, if I'm not mistaken, when calculating, say, the particle's energy, you operate on the wavefunction with the energy operator to get the eigenvalue equation:
\hat{H}\psi = E\psi and using the superposition thing I mentioned above, this can be written as the sum of the individual basis states |\phi_i> each multiplied by their specific energy eigenvalue Ei.
My question is, am I right in thinking that when calculating other observable quantities like, say, momentum, when you do \hat{p}\psi = p\psi (with p being the possible momentum eigenvalues) you are not using the same \psi or \phi as what you used when you calculated the energy eigenvalues?
In short, can you use two different operators operating on the same object?
Well, you can have two different operators operating on the same state. You can apply any operator to any state. But I'm not sure I quite understand what you're asking.
When you do \hat{p}\psi = p\psi, the \psi and \phi_i might be the same as the ones you used for the energy calculation, or they might not be.
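If a numerical picture helps here (again just a sketch with made-up matrices, not a specific physical system): you can write two different operators as matrices in the same basis, apply both to the same state, and solve each one's eigenvalue equation separately.

```python
import numpy as np

# Two made-up Hermitian operators written as matrices in the same basis.
H = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.1],
              [0.0, 0.1, 3.0]])          # an "energy-like" operator
P = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])          # a "momentum-like" operator

psi = np.array([1.0, 2.0, 3.0]) / np.sqrt(14.0)   # one normalized state

# You can act on the same state with either operator...
print(H @ psi)
print(P @ psi)

# ...and each operator has its own eigenvalue equation, solved separately.
E, H_eigenstates = np.linalg.eigh(H)
p, P_eigenstates = np.linalg.eigh(P)
print(E)
print(p)
```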