Homework Help: Wondering about operators and matrix representations?

1. Aug 28, 2010

jeebs

Hi,
this isn't a homework question per se (it's the summer hols, I'm between semesters) but it's something that I never really got during the QM module I just did. I found myself blindly calculating exam & homework problems, and just feel like this is some stuff I should get cleared up.

Firstly, in my course we've been introduced to wavefunctions and the way you use the various operators on them to calculate physical quantities. However, we were never shown where these operators came from and they are a bit weird to look at. For instance, take the momentum operator, $$\hat{p} = -i\hbar\frac{d}{dx}$$. Where does that come from? Nothing about that suggests any link to the classical expression p = mv to my eyes...

The other thing I was wondering about was the matrix representation of the operators. We were often given problems where the matrix form of an operator was already stated, but we didn't really see where these came from. Again I've just been calculating blindly. How do you turn the "normal" form of an operator into a matrix?
Similarly, I'm used to the eigenvalue equation, where you operate on an eigenfunction to get a specific eigenvalue. However, I was introduced to this with the "normal" way of writing out eigenfunctions, like, (random example) $$\phi(x) = A\cos(\frac{n\pi x}{2})$$ or something. I'd often see problems where I'd be operating on some "eigenvector", so how does the normal form of the eigenfunction get turned into the 1-column matrix/eigenvector thing?

Thanks.

2. Aug 29, 2010

diazona

Well, the quantum operators are a bit weird. (Hopefully you were warned that QM is like that sometimes!) The definition of the momentum operator is actually completely unrelated to the classical definition p = mv; in fact in quantum mechanics, you go the other way around and define velocity as v = p/m. The reason is that QM is formulated as a Hamiltonian theory. You may remember that in classical mechanics, when you use the Hamiltonian (as opposed to, say, the Lagrangian), the fundamental variables are x and p, instead of x and v. The same holds true in quantum mechanics.

As for a justification of sorts, consider a position-space plane wave,
$$\psi(x) = e^{ik_0x}$$
for some specific wavenumber k0. When you apply the momentum operator to this, look at what you get:
$$\hat{p}\psi(x) = -i\hbar (ik_0) e^{ik_0x} = \hbar k_0 e^{ik_0x} = p_0 \psi(x)$$
where $p_0 = \hbar k_0$ is the momentum associated with the wavefunction. So for this particular wavefunction, applying the momentum operator to the function is equivalent to multiplying it by its momentum.
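You can check this numerically if you like. Here's a quick sketch (my own illustration, not from the thread) that applies $-i\hbar\,d/dx$ to a sampled plane wave via finite differences; the units with $\hbar = 1$ and the wavenumber $k_0 = 2.5$ are arbitrary choices.

```python
import numpy as np

hbar = 1.0   # natural units (illustrative choice)
k0 = 2.5     # wavenumber of the plane wave (illustrative choice)
x = np.linspace(0.0, 10.0, 100001)
dx = x[1] - x[0]
psi = np.exp(1j * k0 * x)

# p_hat psi = -i*hbar d(psi)/dx, approximated by central differences
p_psi = -1j * hbar * np.gradient(psi, dx)

# away from the grid boundaries, p_psi / psi should be the constant hbar*k0
ratio = p_psi[1:-1] / psi[1:-1]
print(np.allclose(ratio, hbar * k0))
```

So applying the operator really does just scale the plane wave by its momentum $\hbar k_0$.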

The same technique works for more general wavefunctions as well. Hopefully you are familiar with Fourier decomposition, where you take a wavefunction and express it as a linear combination of exponential terms with various coefficients,
$$\psi(x) = \frac{1}{\sqrt{2\pi}}\int\phi(k)e^{ikx}\mathrm{d}k$$
You may know $\phi(k)$ as the momentum-space wavefunction, because it expresses how much of the wavefunction exists at a particular momentum (so to speak). Applying the momentum operator to this gives you
$$\hat{p}\psi(x) = -\frac{i\hbar}{\sqrt{2\pi}}\int\phi(k)ik e^{ikx}\mathrm{d}k = \frac{1}{\sqrt{2\pi}}\int\phi(k)(\hbar k) e^{ikx}\mathrm{d}k$$
Since a general wavefunction doesn't have one specific value of momentum, you couldn't just multiply it by some number for momentum - but what the operator does is basically split the wavefunction into its components which have well-defined momenta and multiply each of those by its momentum.
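That "split into components, multiply each by its momentum" picture can be sketched numerically with an FFT (my own illustration; the Gaussian wavepacket and grid parameters are arbitrary choices). Multiplying each Fourier component $\phi(k)$ by $\hbar k$ and transforming back gives the same result as applying $-i\hbar\,d/dx$ directly in position space:

```python
import numpy as np

hbar = 1.0
N, L = 1024, 20.0
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)   # wavenumber of each Fourier component

# an illustrative Gaussian wavepacket centred at x = 10 with mean wavenumber 3
psi = np.exp(-(x - 10.0)**2) * np.exp(3j * x)

# momentum operator acting in k-space: multiply each component phi(k) by hbar*k
p_psi_fourier = np.fft.ifft(hbar * k * np.fft.fft(psi))

# momentum operator acting in position space: -i*hbar d/dx (central differences)
p_psi_direct = -1j * hbar * np.gradient(psi, dx)

# the two agree, up to finite-difference error
print(np.max(np.abs(p_psi_fourier - p_psi_direct)))
```

The two arrays match to within the accuracy of the finite-difference derivative, which is exactly the equivalence the integral above expresses.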

Regarding your other question, that's easy: given an operator $$\hat{O}$$, you can compute its matrix elements as
$$\langle m \lvert \hat{O} \rvert n \rangle = \int \phi_m^{*} \hat{O} \phi_n \mathrm{d}x$$
Here the states m and n range over the basis of the state space. For example, consider the 1D particle-in-a-box example. It has discrete energy eigenstates which can be numbered with integers starting at 0, and any arbitrary state of the particle can be expressed as a linear combination of those eigenstates (which is why the energy eigenstates form a basis for all possible states). If you wanted to calculate the matrix elements of some operator in the energy eigenstate basis, you would start with
$$\langle 0 \lvert \hat{O} \rvert 0 \rangle$$
and put that in row 0, column 0 of your matrix (i.e. the top left). Then move on to, say,
$$\langle 0 \lvert \hat{O} \rvert 1 \rangle$$
and put that in row 0, column 1 (just to the right of the top left). Of course, there are an infinite number of these matrix elements, so you couldn't calculate them all, but you see the pattern.
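As a concrete version of this recipe (my own sketch, not from the thread), here are the matrix elements $\langle m \lvert \hat{x} \rvert n \rangle$ of the position operator in the energy eigenbasis of a 1D box on $[0, L]$, with $\phi_n(x) = \sqrt{2/L}\sin(n\pi x/L)$ for $n = 1, 2, \dots$, each computed by doing the integral numerically:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]

def phi(n):
    # energy eigenfunctions of the 1D box on [0, L], n = 1, 2, ...
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def matrix_element(m, n):
    # <m| x |n> = integral of phi_m* x phi_n dx (phi_n is real here)
    return np.sum(phi(m) * x * phi(n)) * dx

# fill in a 4x4 corner of the (infinite) matrix, row by row, column by column
size = 4
X = np.array([[matrix_element(m, n) for n in range(1, size + 1)]
              for m in range(1, size + 1)])

print(np.round(X, 4))
```

Every diagonal element comes out as $L/2$ (the box centre), and the matrix is symmetric, as it must be for a Hermitian operator.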

The process of creating a vector out of a wavefunction is pretty similar. Again, you have to have a set of states that form a basis of the state space. Any possible wavefunction can be expressed as a linear combination of these basis states. You can compute the "vector elements" of the state $\lvert\psi\rangle$ as
$$\langle n\vert\psi\rangle = \int \phi_n^{*}\psi\,\mathrm{d}x$$
For a system like the 1D particle in a box, the vector will be of infinite length (because there are an infinite number of basis states).

So basically, once you choose a particular basis of states for a quantum system, you can express any state of the system as a vector of complex numbers,
$$\begin{pmatrix}\langle 0\vert \psi\rangle \\ \langle 1\vert \psi\rangle \\ \vdots\end{pmatrix}$$
and any operator that acts on that state as a matrix of complex numbers,
$$\begin{pmatrix}\langle 0\lvert\hat{O}\rvert 0\rangle & \langle 0\lvert\hat{O}\rvert 1\rangle & \cdots \\ \langle 1\lvert\hat{O}\rvert 0\rangle & \langle 1\lvert\hat{O}\rvert 1\rangle & \\ \vdots & \vdots & \ddots\end{pmatrix}$$
Having done that, you can use matrix arithmetic for all your calculations, and you can apply concepts of linear algebra to quantum problems.

3. Aug 29, 2010

boyu

I'd like to introduce to you a textbook which may be helpful.

Its name is 'Introduction to Quantum Mechanics', written by David J. Griffiths. You may want to refer to pages 15 and 16, where the momentum operator is introduced. Actually the author motivates it in a classical-looking way, starting from the expectation value $\langle p \rangle = m\,\frac{d\langle x\rangle}{dt}$.

If you ask where this momentum operator comes from, probably I would mention Born's statistical interpretation of wave function, which is the starting point of derivation of momentum operator from the above textbook.

4. Aug 29, 2010

jeebs

Right, thanks for that answer man, a lot of it does ring a bell actually. It's been helpful, but it's raised more questions.

So, you do that and presumably every time you do that integral correctly, you end up with some number that slots into the matrix in its particular row and column. My next question is, why does this work? Why did whoever first thought of doing that integral decide that they could get the resulting numbers that come out and put them in a matrix? Is this a complicated thing that I don't really need to know as an undergraduate?
Also, I don't think I have come across any matrices that have non-integer matrix elements in the problems I have done (only things like the angular momentum operator matrices). Does this mean that I should always expect that computing that integral will give me an integer, in anything I come across in QM?

The other thing I need to get straight in my head is about the basis states that you mentioned. As I understand it, if you've got a wavefunction for a particle $\lvert\psi\rangle$ before any measurement is made, you can write it as the superposition $\sum c_i \lvert\phi_i\rangle$. Then, when you make your measurement of some quantity, you find the particle to be in a specific state $\lvert\phi_i\rangle$, the probability of which is given by the squared modulus $\lvert c_i\rvert^2$ - and, if I'm not mistaken, the basis is what we call the collection of individual $\lvert\phi_i\rangle$'s, right? But we can work out what every measurement possibility is from eigenvalue equations?
So, if I'm not mistaken, when calculating, say, the particle's energy, you operate on the wavefunction with the energy operator to get the eigenvalue equation:
$$\hat{H}\psi = E\psi$$ and using the superposition thing I mentioned above, this can be written as the sum of the individual basis states $\lvert\phi_i\rangle$, each multiplied by its coefficient and its specific energy eigenvalue Ei.
My question is, am I right in thinking that when calculating other observable quantities like, say, momentum, when you do $$\hat{p}\psi = p\psi$$ (with p being the possible momentum eigenvalues) you are not using the same $\psi$ or $\phi$ as what you used when you calculated the energy eigenvalues? In short, can you use two different operators operating on the same object? In other words, can you have two different-looking operator matrices that operate to calculate the same eigenvalues?

5. Aug 29, 2010

vela

Staff Emeritus

Presumably, you've taken a course in linear algebra. This matrix stuff works because the operators in quantum mechanics are linear operators. As you may recall from linear algebra, any linear operator can be represented by a matrix. That's what you're doing here.

No. Not exactly. I think you're making the common error students make of equating taking a measurement with applying the corresponding operator. Each observable corresponds to a Hermitian operator. You solve an eigenvalue equation to find the eigenvalues {λ1, λ2, ...} and eigenstates {ϕ1, ϕ2, ...} for this operator. The eigenvalues give you the possible outcomes of a measurement of the quantity. To find the probability of an outcome for a given state $|\psi\rangle$, you find $\langle \phi_i | \psi \rangle$, the overlap of the state with the corresponding eigenstate. That gives you the amplitude whose modulus squared is the probability. Equivalently, you can express the state as a linear combination of the eigenstates, and use the coefficients to find the probabilities, as you described. Note that you never apply the operator to the given state in this process.

You're mixing up some concepts here. First, an observable will always correspond to one operator. The matrix representation of that operator, however, depends on the basis you've chosen. Say you have a momentum operator P. If you use the basis consisting of its eigenstates, you'll find P can be represented by a diagonal matrix. If you choose a different basis, like the energy eigenstates, you'll find that P corresponds to a different matrix. The same idea applies to states. You can have the particle in a state $|\psi\rangle$, but the representation of that state depends on the basis you've decided to work in.

Last edited: Aug 29, 2010

6. Aug 29, 2010

diazona

No, it's not really anything complicated. Basically, any time you're working with things that can be formed out of linear combinations of other things, you can take the coefficients of the linear combination and put them in a vector. And then you can create matrices to represent linear transformations on the vectors. (This is why it's called "linear algebra" - it's algebra applied to linear combinations.) So once the idea of linear combinations of quantum states first came up, it wasn't much of a stretch from there to using vectors and matrices. Incidentally, I believe it was Heisenberg who first formulated quantum mechanics in terms of matrices. He had to do some thinking to figure out how to work with infinite-dimensional matrices, but other than that I suppose it was reasonably straightforward.

I don't think this is the way Heisenberg originally did it, but it's easy to justify this use of vectors and matrices in QM. First you need to know that the basis is orthonormal,
$$\langle\phi_m\vert\phi_n\rangle = \delta_{mn}$$
and complete:
$$\sum_{\text{all states}}\lvert\phi_n\rangle\langle\phi_n\rvert = 1$$
The completeness property may look a tad weird, by the way, if you're not familiar with using states in the form $\lvert a\rangle\langle b\rvert$. All it's saying is that if you take that sum and sandwich it between any two states, you get the same result as if you put the identity operator in the middle:
$$\langle\psi_a\rvert\biggl(\sum\lvert\phi_n\rangle\langle\phi_n\rvert\biggr)\lvert\psi_b\rangle = \langle\psi_a\rvert\mathbf{1}\lvert\psi_b\rangle = \langle\psi_a\vert\psi_b\rangle$$

Anyway, given those two properties, all you have to do is look at what happens when you multiply out the matrix and vector representations of an operator and a state respectively.
$$\begin{pmatrix}\langle 0\lvert\hat{O}\rvert 0\rangle & \langle 0\lvert\hat{O}\rvert 1\rangle & \cdots \\ \langle 1\lvert\hat{O}\rvert 0\rangle & \langle 1\lvert\hat{O}\rvert 1\rangle & \cdots \\ \vdots & \vdots & \ddots\end{pmatrix} \begin{pmatrix}\langle 0\vert \psi\rangle \\ \langle 1\vert \psi\rangle \\ \vdots\end{pmatrix} = \begin{pmatrix}\langle 0\lvert\hat{O}\rvert 0\rangle\langle 0\vert \psi\rangle + \langle 0\lvert\hat{O}\rvert 1\rangle\langle 1\vert \psi\rangle + \cdots\\ \langle 1\lvert\hat{O}\rvert 0\rangle\langle 0\vert \psi\rangle + \langle 1\lvert\hat{O}\rvert 1\rangle\langle 1\vert \psi\rangle + \cdots\\ \vdots\end{pmatrix}$$
You'll notice that each entry in the resulting column vector takes the form
$$\sum_n \langle m\rvert\hat{O}\lvert n\rangle\langle n\vert\psi\rangle$$
(where m is the row number) and using the completeness property, this is equivalent to
$$\langle m\rvert\hat{O}\lvert\psi\rangle$$
But remember, that's just one row of the vector. As you'd expect, it's the same thing you get if you consider $$\hat{O}\lvert\psi\rangle$$ to be a state and compute the "vector elements" of the state.

You can go on to compute an expectation value
$$\langle\psi\rvert\hat{O}\lvert\psi\rangle$$
from the above result:
$$\begin{pmatrix}\langle\psi\vert 0\rangle & \langle\psi\vert 1\rangle & \cdots\end{pmatrix} \begin{pmatrix}\langle 0\rvert\hat{O}\lvert\psi\rangle \\ \langle 1\rvert\hat{O}\lvert\psi\rangle \\ \vdots\end{pmatrix} = \langle\psi\vert 0\rangle\langle 0\rvert\hat{O}\lvert\psi\rangle + \langle\psi\vert 1\rangle\langle 1\rvert\hat{O}\lvert\psi\rangle + \cdots$$
Again, you can use the completeness property to simplify that sum from
$$\sum_m\langle\psi\vert m\rangle\langle m\rvert\hat{O}\lvert\psi\rangle$$
to
$$\langle\psi\rvert\hat{O}\lvert\psi\rangle$$

Basically, you can see that if you take the matrix/vector representations of states and operators and do matrix multiplication with them, you get the same results as if you just write them out as bras and kets. So everything's consistent.
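Here's that consistency made concrete (my own sketch; the 1D-box state and its coefficients are arbitrary illustrative choices). The expectation value $\langle\psi\rvert\hat{x}\lvert\psi\rangle$ comes out the same whether you do the integral directly or multiply out the vector and matrix representations:

```python
import numpy as np

L, size = 1.0, 3
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]

def phi(n):
    # energy eigenfunctions of the 1D box on [0, L], n = 1, 2, ...
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

c = np.array([0.6, 0.8, 0.0])            # coefficients <n|psi>, already normalised
psi = sum(c[n] * phi(n + 1) for n in range(size))

# bra-ket style: <psi| x |psi> = integral of psi* x psi dx (psi is real here)
direct = np.sum(psi * x * psi) * dx

# matrix style: row vector of coefficients, times matrix, times column vector
X = np.array([[np.sum(phi(m + 1) * x * phi(n + 1)) * dx for n in range(size)]
              for m in range(size)])
matrix_version = c @ X @ c

print(np.isclose(direct, matrix_version))
```

The truncation to a 3x3 matrix is harmless here only because the state lies entirely in the span of those three basis states; for a general state you'd need more of the infinite matrix.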
No, in general the matrix elements can be any complex numbers. So fractions, decimals, irrationals, imaginary numbers, etc. are all fair game. Usually for teaching they pick "simple" numbers (such as integers) to keep the calculations from getting out of hand, though.

That's right, the basis is the collection of $\lvert\phi_i\rangle$'s. Just like when you're describing some object's coordinates in 3D space, you have the basis vectors $\hat{x},\hat{y},\hat{z}$.

Now, when you make a measurement of some particular operator/quantity for some particle, you will find the particle to be in one of the eigenstates of that operator. If you were using the eigenstates of that operator as your basis all along, then yes, the state will be one of the $\lvert\phi_i\rangle$'s. But if you were using a different basis, you'll find that the particle is not in one of your basis states; it'll still be a linear combination of them. But the particular linear combination you find will be an eigenstate of the operator corresponding to the quantity you measured.

For example, it's common to use energy eigenstates as a basis. If you measure, say, angular momentum, you will find that the particle's state after the measurement is an eigenstate of angular momentum. But that won't necessarily be one of your basis states.
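A two-state example of this (mine, not from the thread): work in some fixed basis and measure an observable whose matrix in that basis is the Pauli matrix $\sigma_x$. The post-measurement states are eigenvectors of $\sigma_x$, but each one is a linear combination of both basis states:

```python
import numpy as np

# the observable's matrix in our chosen basis (sigma_x): off-diagonal, not a basis
# state's "own" operator
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

# possible measurement outcomes and post-measurement states
vals, vecs = np.linalg.eigh(sx)

# each post-measurement state really is an eigenstate of the measured operator...
for i in range(2):
    assert np.allclose(sx @ vecs[:, i], vals[i] * vecs[:, i])

# ...yet in our basis, each one is a superposition of BOTH basis states:
# every component has magnitude 1/sqrt(2), so neither basis state drops out
print(np.abs(vecs))
```

If we had instead chosen the eigenstates of $\sigma_x$ as our basis, the same operator would be represented by a diagonal matrix, which is vela's point about representations being basis-dependent.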
Well, you can have two different operators operating on the same state. You can apply any operator to any state. But I'm not sure I quite understand what you're asking.

When you do $\hat{p}\psi = p\psi$, the $\psi$ and $\phi_i$ might be the same as the ones you used for energy, but in general they won't be: each operator has its own set of eigenstates, and the two sets only coincide when the operators commute (as for a free particle, where momentum eigenstates are also energy eigenstates).

Last edited: Aug 30, 2010