# I Dirac notation

1. Apr 6, 2016

### Kara386

In my lecture notes, it says that

$\langle l | A_{nm} | \psi \rangle = \sum_{n,m} A_{nm} \langle m | \psi \rangle \langle l | n \rangle$
$= \sum_{n,m} A_{nm} \langle m | \psi \rangle \delta_{ln}$
$= \sum_{m} A_{lm} \langle m | \psi \rangle$

And the notes also state that a basis is a set of vectors {$\left| n \right\rangle$}, where n = 1,2,3,...,N. I'm not sure what that means, but presumably this is what $\left| m \right\rangle$ and $\left| n \right\rangle$ are in the summations above? If $\left| n \right\rangle = 1,2,3...N$ and these bra-ket things are like vectors, wouldn't that mean that $\left| n \right\rangle = \left( \begin{array}{c} 1\\ 2\\ ...\\ N\\ \end{array} \right)$? I know that's not true. I think it's meant to be a column vector of unit vectors like the i,j,k used in Cartesian. I don't see how a column vector of unit vectors equates to {$\left| n \right\rangle$}, where n = 1,2,3,...,N.

Essentially, I don't understand the manipulation above at all. I don't know where the delta comes from or why you can write an operator $A$ as $A = \sum_{n,m} A_{nm} \left| n \right\rangle \left\langle m \right|$, which I presume is what's happened here? I think it's meant to be obvious because there's no explanation, but it isn't obvious to me.

I'd really appreciate any help - it's quite a long question! :)

Last edited: Apr 6, 2016
2. Apr 6, 2016

### blue_leaf77

You should check your notes again: it cannot be the case that an operator $A$ (left side) doubles itself (right side). Moreover, I suspect the indices $nm$ shouldn't be there on the left, because in the usual notation a subscript following an operator denotes a matrix element of that operator in the basis specified by the indices. So $A_{nm}$ means a number, and if one follows this, $\langle l | A_{nm} | \psi \rangle = A_{nm}\langle l | \psi \rangle$.
Most likely yes.
If $\{ |n\rangle \}$ are a set of orthonormal vectors, then $\langle m|n\rangle = \delta_{mn}$.
There is the so-called completeness relation $\sum_n |n\rangle \langle n| = 1$. Insert it both to the right and to the left of $A$ to obtain $A = \sum_n\sum_m |n\rangle \langle n|A|m\rangle \langle m| = \sum_n\sum_m A_{nm} |n\rangle \langle m|$, where $A_{nm}= \langle n|A|m\rangle$.
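As a quick numerical check (my own sketch, not from the notes): store an orthonormal basis $\{|n\rangle\}$ as the columns of a unitary matrix, and both the completeness relation and the expansion of $A$ in its matrix elements can be verified directly with numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Build an orthonormal basis {|n>}: the columns of a unitary matrix U.
# (QR decomposition of a random complex matrix yields a unitary Q factor.)
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
kets = [U[:, n].reshape(N, 1) for n in range(N)]   # column vectors |n>
bras = [k.conj().T for k in kets]                  # row vectors <n|

# Completeness: sum_n |n><n| equals the identity.
completeness = sum(k @ b for k, b in zip(kets, bras))
assert np.allclose(completeness, np.eye(N))

# An arbitrary operator A is recovered from its matrix elements <n|A|m>:
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A_rebuilt = sum((bras[n] @ A @ kets[m]).item() * (kets[n] @ bras[m])
                for n in range(N) for m in range(N))
assert np.allclose(A, A_rebuilt)
```

The outer products $|n\rangle\langle m|$ are just the `kets[n] @ bras[m]` matrices, so the abstract expansion becomes an ordinary sum of matrices.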

Judging from your questions, I'm afraid your difficulty lies at a more basic stage of QM. In that case it isn't effective to seek answers to individual questions, which may not follow the proper pedagogical sequence of the subject. So I suggest you read some introductory QM books, see how far you get, and come back here if you encounter a problem.

Last edited: Apr 6, 2016
3. Apr 6, 2016

### Kara386

Quite right, a typo from copying and pasting all the LaTeX stuff. I took it out. The indices are definitely there, though.

Is that true only if the $\langle m|$ are also orthonormal vectors, or for any vectors?

Yes, I thought so. The lecture notes are meant to be the basic introduction, but it clearly isn't working. I'll have a look in some textbooks. Thanks!

4. Apr 6, 2016

### blue_leaf77

Then I'm pretty sure the $A$ on the left-hand side of $\langle l | A_{nm} | \psi \rangle = \sum_{n,m} A_{nm} \langle m |\psi \rangle \langle l | n \rangle$ should not carry the subscripted indices. In that case the following derivation makes sense:
$$\left\langle l \right| A \left| \psi \right\rangle =\langle l| \sum_n |n\rangle \langle n|A \sum_m |m\rangle \langle m| \psi\rangle = \sum_n\sum_m A_{nm} \langle l |n\rangle \langle m| \psi \rangle$$
with $A_{nm} = \langle n|A|m\rangle$.
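Here is a small numerical check of that derivation (my own illustration): evaluating $\langle l|A|\psi\rangle$ directly gives the same number as inserting $\sum_n |n\rangle\langle n|$ and $\sum_m |m\rangle\langle m|$ around $A$ and summing the pieces $\langle l|n\rangle\langle n|A|m\rangle\langle m|\psi\rangle$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

# Orthonormal basis {|n>} as the columns of a unitary matrix U.
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # arbitrary operator
psi = rng.normal(size=N) + 1j * rng.normal(size=N)          # arbitrary |psi>
l = 2                                                       # pick any <l| from the basis
bra_l = U[:, l].conj()

# Direct evaluation of <l|A|psi>:
direct = bra_l @ A @ psi

# Same quantity after inserting completeness relations around A:
expanded = sum(
    (U[:, n].conj() @ A @ U[:, m])   # <n|A|m>
    * (bra_l @ U[:, n])              # <l|n>  (this is delta_{ln} here)
    * (U[:, m].conj() @ psi)         # <m|psi>
    for n in range(N) for m in range(N)
)
assert np.allclose(direct, expanded)
```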
$\langle m |$ is the Hermitian conjugate (the conjugate transpose, i.e. the dual) of $|m\rangle$, which is also an element of the set $\{|n\rangle\}$.

Last edited: Apr 6, 2016
5. Apr 6, 2016

### secur

n = 1, 2, ..., N is just an integer index. |n> represents (or indexes) a set of basis vectors |1>, |2>, ..., |N>. They behave like i, j, k from Cartesian coordinates, and can be any complete set of orthogonal vectors. Just as in Cartesian space you can rotate the 3 axes to get a different complete set of orthogonal basis vectors, the same applies here in Hilbert space. But now there can be a (possibly) infinite list of basis vectors (although here we're dealing only with a finite list, up to N), and rotation is done by a unitary matrix. n and m both refer to the same list of basis vectors.

So when you see summing over |n>, that means substitute |1>, then |2>, etc into the expression and sum all the results. Essentially you're taking the dot product of the item on the other side of the bracket with each basis vector one after the other, to get the coefficients of that item in this basis.

There is an N x N matrix where each column (or row) is one of the basis vectors, as you mention - the same way i, j, k can be put in a single 3 x 3 matrix. That N x N matrix is the one used to rotate a vector into this basis ... skipping details.

The delta subscripted by ln (the Kronecker delta) is 1 if l = n and 0 otherwise. Since these are all orthonormal basis vectors (like i, j, k), the delta just represents their dot product. You know that (i,i) (the traditional dot product) is 1 but (i,j) is 0. Same idea.

So what the delta does, as it multiplies the summation over n and m, is keep only the terms where n = l and throw all the others away. That's why n disappears in the final summation, and it's been replaced by l in A's subscript.
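That collapsing of the double sum is easy to see numerically (a sketch of my own, with arbitrary numbers standing in for the matrix elements and the $\langle m|\psi\rangle$ coefficients):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
A = rng.normal(size=(N, N))   # stand-ins for the matrix elements A_nm
c = rng.normal(size=N)        # stand-ins for the numbers <m|psi>
l = 3

# Double sum with the Kronecker delta delta_{ln} written out explicitly:
with_delta = sum(A[n, m] * c[m] * (1 if l == n else 0)
                 for n in range(N) for m in range(N))

# The delta keeps only the n = l terms, collapsing to sum_m A_lm <m|psi>:
collapsed = sum(A[l, m] * c[m] for m in range(N))

assert np.isclose(with_delta, collapsed)
```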

Hope that helps. In trying to put it simply there may be an inaccuracy or two - for instance I'm ignoring bra vs. ket, and probably the term "dot product" could be objected to; but this is the basic idea.

6. Apr 10, 2016

### Kara386

OK, I've been reading a book by Griffiths on QM, and I think I have a much better idea of what's going on. But there's a line in one of the calculations which states
$\sum_n \langle e_m | e_n \rangle = 1$
$e_m$ and $e_n$ are basis somethings. Vectors maybe. Is that line true because the basis is normalised? The book doesn't explicitly say so, but I think that's the only way it's true? If the inner product is like the dot product, wouldn't an orthonormal basis have an inner product of zero except when m=n, when it would be one?

7. Apr 10, 2016

### vanhees71

The formula doesn't make much sense in the formalism of quantum theory. Nevertheless it's correct, provided the $|e_n \rangle$ are a set of orthonormal vectors, because then by definition
$$\langle e_m|e_n \rangle=\delta_{mn}=\begin{cases} 1 & \text{if} \quad m=n, \\ 0 & \text{if} \quad m \neq {n}. \end{cases}$$
Then of course the sum over $n$ is $1$, provided its range includes $m$.

I just don't see where you could ever need this formula in quantum theory...

8. Apr 10, 2016

### Kara386

I suppose it might not end up actually being used. It was a chapter on Dirac notation, so it was probably included for completeness more than anything else. Specifically, this line came up while it was being demonstrated that operators are represented by their matrix elements in some particular basis.

9. Apr 10, 2016

### George Jones

Staff Emeritus
For context, could you state where in Griffiths (page number and/or equation number) this occurs?

10. Apr 10, 2016

### Kara386

Chapter 3, page 120, the step from Equation 3.83 to 3.84 suggests the relation I queried. It's a fantastic book, and I'm sure he stated somewhere earlier about the basis being orthonormal, but I'd forgotten by the time I got to 3.83.

11. Apr 10, 2016

### secur

Let me address these terminology issues.

A "ket", whether part of a basis or not, is, in fact, a vector in Hilbert Space. As such it's similar to i,j or k in Cartesian space (or linear combinations thereof, like 3i-2j+k). But there are some big differences.

Kets have complex coefficients, not real; that makes their behavior very counterintuitive sometimes. Only their direction matters, so they're always normalized to 1. Actually, often we don't bother to normalize them (by dividing by the square root of 2, for instance); instead it's understood, and that can add to the confusion.

Since Hilbert space can be infinite-dimensional, the basis can even be continuous. In that case, instead of summing you have to integrate, and the integral can diverge if you're not careful. Also, such continuous bases can't easily be "normalized to 1". Finally, unlike in Cartesian space, we can't casually assume a pre-existing basis like i, j, k; we have to create it "from scratch" using the eigenvectors of an observable.
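As a tiny illustration of the "complex coefficients, normalized to 1" point (my own sketch, with an arbitrary example state):

```python
import numpy as np

# A ket with complex coefficients, e.g. |psi> = (1+i)|0> + (1-i)|1>, unnormalized:
psi = np.array([1 + 1j, 1 - 1j])

# Its norm comes from the complex inner product <psi|psi> (conjugate the bra!):
norm = np.sqrt(np.vdot(psi, psi).real)   # np.vdot conjugates its first argument
psi_normalized = psi / norm              # here norm = 2, so we divide by 2

assert np.isclose(np.vdot(psi_normalized, psi_normalized).real, 1.0)
```

Note that a naive real dot product of `psi` with itself would give the wrong (complex) answer; the conjugation in the bra is exactly what makes the norm real and positive.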

For instance, the electron orbitals of a hydrogen atom (s, p, d, etc.) are eigenvectors of the Hamiltonian (the energy operator). They are, indeed, vectors in a Hilbert space and provide a complete orthonormal basis - like i, j, k. But they're obviously a lot more complicated than that simple analogy suggests.

More or less, you can picture kets like vectors in Cartesian space. Even David Hilbert thought of them that way. But be aware that this intuitive picture can be misleading when you get into more advanced topics.

Physicists avoid the term "dot product", favoring "inner product" or "scalar product". They want to emphasize that it's different from the simple dot product in Cartesian (real) spaces. But mathematicians (like myself) use the terms interchangeably. Here at PF let's not call it "dot" - when in Rome, do as the Romans do.

Last edited: Apr 10, 2016
12. Apr 10, 2016

### vanhees71

As so often in this forum, I cannot understand why people like this book so much. Reading just this example tells me he's often making things more "mysterious" than they are.

The point is that you have a complete orthonormal system (CONS), i.e., a set of vectors normalized to 1 and perpendicular to each other, i.e.,
$$\langle e_n | e_m \rangle=\delta_{nm}$$
fulfilling the completeness relation
$$\sum_{n=1}^{\infty} |e_n \rangle \langle e_n|=\hat{1}.$$
Now it's very simple to derive the equation Griffiths is after. He considers a ket $|\alpha \rangle$ that is mapped by a linear operator $\hat{Q}$ to $|\beta \rangle$:
$$|\beta \rangle=\hat{Q} |\alpha \rangle.$$
Now, because of the completeness relation
$$|\alpha \rangle=\sum_{n=1}^{\infty} |e_n \rangle \langle e_n|\alpha \rangle.$$
Defining
$$a_n=\langle e_n|\alpha \rangle$$
you have
$$|\alpha \rangle = \sum_{n=1}^{\infty} a_n |e_n \rangle.$$
Now apply $\hat{Q}$ to this equation and use that it's a linear operator:
$$|\beta \rangle = \hat{Q} |\alpha \rangle = \sum_{n=1}^{\infty} a_n \hat{Q}|e_n \rangle.$$
Now we want to express also $|\beta \rangle$ in terms of its components wrt. the basis. We know the recipe! We just need to introduce another completeness relation. We only have to use another summation index:
$$|\beta \rangle = \sum_{n,m=1}^{\infty} a_n |e_m \rangle \langle e_m|\hat{Q}|e_n \rangle = \sum_{n,m=1}^{\infty} |e_m \rangle Q_{mn} a_n,$$
i.e.,
$$b_m=\sum_{n=1}^{\infty} Q_{mn} a_n.$$
It's the great strength of the Dirac notation that such manipulations are so easy to carry out. The notation is so clever that it's almost foolproof. I remember that at the beginning of my university studies I had some difficulties knowing where which basis vectors had to be used, and what the matrix representation of a linear operator was in linear algebra. The Dirac notation was the relief. You can't do much wrong after some time of getting used to it!
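The whole derivation above can be checked numerically in a few lines (a sketch of my own, with an arbitrary operator and state): expand $|\alpha\rangle$ in an orthonormal basis, form the matrix elements $Q_{mn}=\langle e_m|\hat{Q}|e_n\rangle$, and the components of $|\beta\rangle=\hat{Q}|\alpha\rangle$ come out as the matrix-vector product $b_m=\sum_n Q_{mn}a_n$.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4

# Orthonormal basis {|e_n>} as the columns of a unitary matrix E.
E, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

Q = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # linear operator Q-hat
alpha = rng.normal(size=N) + 1j * rng.normal(size=N)        # |alpha>
beta = Q @ alpha                                            # |beta> = Q|alpha>

# Components in the basis: a_n = <e_n|alpha>, b_m = <e_m|beta>.
a = E.conj().T @ alpha
b = E.conj().T @ beta

# Matrix elements Q_mn = <e_m|Q|e_n>; then b_m = sum_n Q_mn a_n.
Q_mat = E.conj().T @ Q @ E
assert np.allclose(b, Q_mat @ a)
```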

13. Apr 11, 2016

### Kara386

Wow. Thank you for such brilliant answers! I really appreciate everyone's help. I understand the notation much better, I hope!