Understanding Bra-Ket Notation in Quantum Mechanics

  • #1
Kara386
In my lecture notes, it says that

##\left\langle l \right| A_{nm} \left| \psi \right\rangle = \sum_{n,m} A_{nm} \langle m | \psi \rangle \langle l | n \rangle##
##=\sum_{n,m} A_{nm}\langle m | \psi \rangle \delta_{ln}##
##= \sum_{m} A_{lm} \langle m | \psi \rangle##

And the notes also state that a basis is a set of vectors {##\left| n \right\rangle##}, where n = 1,2,3,...,N. I'm not sure what that means, but presumably this is what ##\left| m \right\rangle## and ##\left| n \right\rangle## are in the summations above? If ##\left| n \right\rangle = 1,2,3...N## and these bra-ket things are like vectors, wouldn't that mean that ##\left| n \right\rangle = \begin{pmatrix} 1\\ 2\\ \vdots\\ N \end{pmatrix}##? I know that's not true. I think it's meant to be a column vector of unit vectors like the i,j,k used in Cartesian coordinates. I don't see how a column vector of unit vectors equates to {##\left| n \right\rangle##}, where n = 1,2,3,...,N.

Essentially, I don't understand the manipulation above at all. I don't know where the delta comes from or why you can write an operator ##A## as ##A = \sum_{n,m} A_{nm} \left| n \right\rangle \left\langle m \right|##, which I presume is what's happened here? I think it's meant to be obvious because there's no explanation, but it isn't obvious to me.

I'd really appreciate any help - it's quite a long question! :)
 
Last edited:
  • #2
Kara386 said:
##\left\langle l \right| A_{nm} \left| \psi \right\rangle = \sum_{n,m} A_{nm} \left\langle m \right| \left|\psi \right\rangle \left\langle l \right| A_{nm} \left| n \right\rangle##
You should check your notes again; it cannot be the case that the operator ##A## on the left side appears doubled on the right side. Moreover, I suspect that the indices ##nm## shouldn't be there on the left, because in the usual notation a subscript following an operator symbol denotes the matrix element of that operator in the basis specified by the indices. So ##A_{nm}## means a number, and if one follows this, ##\langle l | A_{nm} | \psi \rangle = A_{nm}\langle l | \psi \rangle##.
Kara386 said:
I'm not sure what that means, but presumably this is what ##\left| m \right\rangle## and ##\left| n \right\rangle## are in the summations above?
Most likely yes.
Kara386 said:
I don't know where the delta comes from
If ##\{ |n\rangle \}## are a set of orthonormal vectors, then ##\langle m|n\rangle = \delta_{mn}##.
Kara386 said:
why you can write an operator ##A## as ##A = \sum_{n,m} A_{nm} \left| n \right\rangle \left\langle m \right|##, which I presume is what's happened here?
There is the so-called completeness relation ##\sum_n |n\rangle \langle n| = 1##. Use it twice to the right as well as to the left of ##A## to obtain ##\sum_n\sum_m |n\rangle \langle n|A|m\rangle \langle m| = \sum_n\sum_m A_{mn} |n\rangle \langle m| ## where ##A_{mn}= \langle n|A|m\rangle ##.
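The completeness relation and the matrix-element expansion above can be checked numerically. Here is a minimal pure-Python sketch (a hypothetical 2x2 example, not from the thread), with the basis ##\{|n\rangle\}## taken as the standard unit column vectors:

```python
# Sketch: completeness sum_n |n><n| = 1 and A = sum_{n,m} <n|A|m> |n><m|,
# verified for a 2x2 example where the basis is the standard unit vectors.

def bra_ket(u, v):
    """Inner product <u|v>, with complex conjugation on the bra side."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def outer(u, v):
    """Outer product |u><v| as a matrix (list of rows)."""
    return [[a * b.conjugate() for b in v] for a in u]

def apply(M, v):
    """Matrix-vector product M|v>."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

basis = [[1, 0], [0, 1]]          # orthonormal basis {|n>}

# Completeness: sum_n |n><n| equals the identity matrix.
identity = [[sum(outer(n, n)[i][j] for n in basis)
             for j in range(2)] for i in range(2)]
print(identity)                   # [[1, 0], [0, 1]]

# Any operator A is recovered from its matrix elements <n|A|m>.
A = [[1, 2j], [-2j, 3]]
elem = [[bra_ket(n, apply(A, m)) for m in basis] for n in basis]
print(elem == A)                  # True: <n|A|m> are exactly the entries of A
```

In this standard basis the matrix elements coincide with the array entries, which is exactly the statement that an operator "is" a matrix once a basis is chosen.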

Judging from your questions, I am afraid that your difficulty lies with the basics of QM. In that case it wouldn't be effective to seek answers to individual questions, which may not follow a pedagogically sound sequence for learning the subject. Therefore, I suggest that you read some introductory books on QM, see how far you can get, and come back here if you encounter a problem.
 
  • #3
blue_leaf77 said:
You should check again your note, it cannot be the case that an operator ##A## (left side) can double itself (right side).
Quite right, a typo from copying and pasting all the latex stuff. I took it out. The indices are definitely there though.

blue_leaf77 said:
If ##\{ |n\rangle \}## are a set of orthonormal vectors, then ##\langle m|n\rangle = \delta_{mn}##.
Is that if ##\langle m|## is also a set of orthonormal vectors, or just any vectors?

blue_leaf77 said:
Judging from your questions, I am afraid that your issue lies in the basic stage in QM.
Yes, I thought so. The lecture notes are meant to be the basic introduction, but it clearly isn't working. I'll have a look in some textbooks. Thanks!
 
  • #4
Kara386 said:
Quite right, a typo from copying and pasting all the latex stuff. I took it out. The indices are definitely there though.
Then I am pretty sure that the ##A## on the left side of ##
\left\langle l \right| A_{mn} \left| \psi \right\rangle = \sum_{n,m} A_{nm} \left\langle m \right| \left|\psi \right\rangle \left\langle l \right| \left| n \right\rangle## should not have subscripted indices. In that case, the following derivation makes sense
$$
\left\langle l \right| A \left| \psi \right\rangle
=\langle l| \sum_n |n\rangle \langle n|A \sum_m |m\rangle \langle m| \psi\rangle = \sum_n\sum_m A_{mn} \langle l |n\rangle \langle m| \psi \rangle
$$
with ##A_{mn} = \langle n|A|m\rangle##.
Kara386 said:
Is that if ##\langle m|## is also a set of orthonormal vectors, or just any vectors?
##\langle m |## is the Hermitian conjugate (dual vector) of ##|m\rangle##, where ##|m\rangle## is also an element of the set ##\{|n\rangle\}##.
 
  • #5
n = 1, 2, ..., N is just an integer index. |n> represents (indexes) a set of basis vectors |1>, |2>, ..., |N>. They behave like i, j, k from Cartesian coordinates, and can be any complete set of orthogonal vectors. Just as in Cartesian space you can rotate the 3 axes to get a different complete set of orthogonal basis vectors, the same applies here in Hilbert space. But now there may be infinitely many basis vectors (although here we're dealing only with a finite list, up to N), and the rotation is done by a unitary matrix. n and m both refer to the same list of basis vectors.

So when you see summing over |n>, that means substitute |1>, then |2>, etc into the expression and sum all the results. Essentially you're taking the dot product of the item on the other side of the bracket with each basis vector one after the other, to get the coefficients of that item in this basis.

There is an N x N matrix where each column (or row) is one of the basis vectors, as you mention - the same way i, j, k can be put into a single 3 x 3 matrix. That N x N matrix would be the one used to rotate a vector to this basis ... skipping details.

The delta subscripted by ln is 1 if l=n and 0 otherwise. Since these are all unit basis vectors (like i,j,k) this delta represents just the dot product. You know that (i,i) (traditional dot product) is 1 but (i,j) is 0. Same idea.

So what the delta does, as it multiplies the terms of a summation over n and m, is keep only the terms where n = l and throw all the others away. That's why the n disappears in the final summation, and it's been replaced by l in A's subscript list.

Hope that helps. In trying to put it simply there may be an inaccuracy or two - for instance I'm ignoring bra vs. ket, and probably the term "dot product" could be objected to; but this is the basic idea.
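The sum-collapsing behavior of the delta described above can be checked with a small pure-Python sketch (a hypothetical 3x3 example; the matrix and coefficients are made up for illustration):

```python
# Sketch: delta_{ln} inside a double sum keeps only the terms with n == l,
# so sum_{n,m} A[n][m] * x[m] * delta(l, n) collapses to sum_m A[l][m] * x[m].

def delta(i, j):
    """Kronecker delta: 1 if i == j, else 0."""
    return 1 if i == j else 0

N = 3
A = [[i * N + j + 1 for j in range(N)] for i in range(N)]  # sample 3x3 matrix
x = [2, -1, 5]                                             # sample coefficients
l = 1

full = sum(A[n][m] * x[m] * delta(l, n) for n in range(N) for m in range(N))
collapsed = sum(A[l][m] * x[m] for m in range(N))
print(full == collapsed)  # True
```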
 
  • #6
blue_leaf77 said:
Then I am pretty sure that the ##A## in the left side of ##
\left\langle l \right| A_{mn} \left| \psi \right\rangle = \sum_{n,m} A_{nm} \left\langle m \right| \left|\psi \right\rangle \left\langle l \right| \left| n \right\rangle## should not have subscripted indices. In this case, the following derivation will make sense
$$
\left\langle l \right| A \left| \psi \right\rangle
=\langle l| \sum_n |n\rangle \langle n|A \sum_m |m\rangle \langle m| \psi\rangle = \sum_n\sum_m A_{mn} \langle l |n\rangle \langle m| \psi \rangle
$$
with ##A_{mn} = \langle n|A|m\rangle##.

##\langle m |## is a complex conjugate of ##|m\rangle## which is also an element of the set ##\{|n\rangle\}##.

OK, I've been reading a book by Griffiths on QM, and I think I have a much better idea of what's going on. But there's a line in one of the calculations which states
##\sum_n \langle e_m | e_n \rangle = 1##
##e_m## and ##e_n## are basis somethings. Vectors maybe. Is that line true because the basis is normalised? The book doesn't explicitly say so, but I think that's the only way it's true? If the inner product is like the dot product, wouldn't an orthonormal basis have an inner product of zero except when m=n, when it would be one?
 
  • #7
The formula doesn't make much sense in the formalism of quantum theory. Nevertheless it's correct, provided ##|e_n \rangle## are a set of orthonormalized vectors, because then by definition
$$\langle e_m|e_n \rangle=\delta_{mn}=\begin{cases} 1 & \text{if} \quad m=n, \\ 0 & \text{if} \quad m \neq {n}. \end{cases}$$
Then of course the sum over ##n## is ##1##, provided it also runs over ##m##.

I just don't see where you could ever need this formula in quantum theory...
 
  • #8
vanhees71 said:
The formula doesn't make much sense in the formalism of quantum theory. Nevertheless it's correct, provided ##|e_n \rangle## are a set of orthonormalized vectors, because then by definition
$$\langle e_m|e_n \rangle=\delta_{mn}=\begin{cases} 1 & \text{if} \quad m=n, \\ 0 & \text{if} \quad m \neq {n}. \end{cases}$$
Then of course the sum over ##n## is ##1##, provided it also runs over ##m##.

I just don't see where you could ever need this formula in quantum theory...

I suppose it might not end up actually being used. It was in a chapter on Dirac notation, so it was probably included for completeness more than anything else. Specifically, this line came up while it was being demonstrated that operators are represented by their matrix elements in some particular basis.
 
  • #9
For context, could you state where in Griffiths (page number and/or equation number) this occurs?
 
  • #10
George Jones said:
For context, could you state where in Griffiths (page number and/or equation number) this occurs?
Chapter 3, page 120, the step from Equation 3.83 to 3.84 suggests the relation I queried. It's a fantastic book, and I'm sure he stated somewhere earlier about the basis being orthonormal, but I'd forgotten by the time I got to 3.83.
 
  • #11
Kara386 said:
basis somethings. Vectors maybe.
Kara386 said:
If the inner product is like the dot product

Let me address these terminology issues.

A "ket", whether part of a basis or not, is, in fact, a vector in Hilbert Space. As such it's similar to i,j or k in Cartesian space (or linear combinations thereof, like 3i-2j+k). But there are some big differences.

Kets have complex coefficients, not real ones; that makes their behavior very counterintuitive sometimes. Only their direction matters, so they're always normalized to 1. Actually, we often don't bother to normalize them (by dividing by the square root of 2, for instance); instead the normalization is understood, which can add to the confusion. Since Hilbert space can have infinitely many dimensions, the basis can be continuous. In that case, instead of summing you have to integrate, and the integral can diverge if you're not careful. Also, such continuous bases can't easily be "normalized to 1". Finally, unlike in Cartesian space, we can't casually assume a pre-existing basis like i, j, k; we have to create it "from scratch" using the eigenvectors of an observable.
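The normalization convention mentioned above can be sketched numerically. A hypothetical two-component ket in plain Python (not from the post):

```python
# Sketch: a ket with complex components, normalized so that <psi|psi> = 1.
import math

psi = [1, 1j]                                  # unnormalized ket
norm = math.sqrt(sum(abs(c)**2 for c in psi))  # sqrt(<psi|psi>) = sqrt(2)
psi_hat = [c / norm for c in psi]              # normalized ket

print(sum(abs(c)**2 for c in psi_hat))         # ~1.0 (up to floating point)
```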

For instance, the electron orbitals of a hydrogen atom (s, p, d, etc.) are eigenvectors of the Hamiltonian. They are, indeed, vectors in a Hilbert space and provide a complete orthonormal basis - like i, j, k. But they're obviously a lot more complicated than that simple analogy suggests.

More or less, you can picture kets like vectors in Cartesian space. Even David Hilbert thought of them that way. But be aware that this intuitive picture can be misleading when you get into more advanced topics.

Physicists avoid the term "dot product" favoring "inner product" or "scalar product". They want to emphasize that it's different from the simple dot product in Cartesian (real) spaces. But mathematicians (like myself) use the terms interchangeably. Here at PF let's not call it "dot" - when in Rome, do as the Romans do.
 
  • #12
As so often in this forum, I cannot understand why people like this book so much. Reading just this example tells me he's often making things more "mysterious" than they are.

The point is that you have a complete orthonormal system (CONS), i.e., a set of vectors normalized to 1 and perpendicular to each other, i.e.,
$$\langle e_n | e_m \rangle=\delta_{nm}$$
fulfilling the completeness relation
$$\sum_{n=1}^{\infty} |e_n \rangle \langle e_n|=\hat{1}.$$
Now it's very simple to derive the equation Griffiths is after. He considers a ket ##|\alpha \rangle## that is mapped by a linear operator ##\hat{Q}## to ##|\beta \rangle##:
$$|\beta \rangle=\hat{Q} |\alpha \rangle.$$
Now, because of the completeness relation
$$|\alpha \rangle=\sum_{n=1}^{\infty} |e_n \rangle \langle e_n|\alpha \rangle.$$
Defining
$$a_n=\langle e_n|\alpha \rangle$$
you have
$$|\alpha \rangle = \sum_{n=1}^{\infty} a_n |e_n \rangle.$$
Now apply ##\hat{Q}## to this equation and use that it's a linear operator:
$$|\beta \rangle = \hat{Q} |\alpha \rangle = \sum_{n=1}^{\infty} a_n \hat{Q}|e_n \rangle.$$
Now we want to express also ##|\beta \rangle## in terms of its components wrt. the basis. We know the recipe! We just need to introduce another completeness relation. We only have to use another summation index:
$$|\beta \rangle = \sum_{n,m=1}^{\infty} a_n |e_m \rangle \langle e_m|\hat{Q}|e_n \rangle = \sum_{n,m=1}^{\infty} |e_m \rangle Q_{mn} a_n,$$
i.e.,
$$b_m=\sum_{n=1}^{\infty} Q_{mn} a_n.$$
It's the great strength of the Dirac notation that such manipulations are so easy to carry out. The notation is so clever that it's almost foolproof. I remember that at the beginning of my university studies I had some difficulty knowing where which basis vectors had to be used and what the matrix decomposition of a linear operator is in linear algebra. The Dirac notation was the relief. You can't do much wrong after some time of getting used to it!
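The final relation ##b_m=\sum_{n} Q_{mn} a_n## is just a matrix-vector product. A tiny pure-Python sketch (a hypothetical 2-dimensional example; the operator and components are made up for illustration):

```python
# Sketch: components of |beta> = Q|alpha> in an orthonormal basis are
# b_m = sum_n Q_{mn} a_n, i.e. an ordinary matrix-vector product.
Q = [[0, 1], [1, 0]]   # sample operator matrix Q_{mn} in the {|e_n>} basis
a = [3 + 1j, 2]        # components a_n = <e_n|alpha>

b = [sum(Q[m][n] * a[n] for n in range(2)) for m in range(2)]
print(b)               # [(2+0j), (3+1j)] — this Q swaps the two components
```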
 
  • #13
Wow. Thank you for such brilliant answers! I really appreciate everyone's help. I understand the notation much better, I hope!
 

1. What is bra-ket notation in quantum mechanics?

Bra-ket notation, also known as Dirac notation, is a mathematical notation used to denote quantum states and operators in quantum mechanics. It consists of a bra vector, denoted as <A|, representing the conjugate transpose of the ket vector |A>, which represents a quantum state. The notation is named after physicist Paul Dirac, who introduced it in his work on quantum mechanics.

2. How is bra-ket notation used in quantum mechanics?

In quantum mechanics, bra-ket notation is used to represent quantum states, operators, and their mathematical operations. It allows for a more concise and elegant representation of complex quantum systems, making it easier for scientists to perform calculations and analyze data. It is also used to describe the evolution of quantum systems over time, through the use of time-dependent operators.

3. What are some advantages of using bra-ket notation?

Bra-ket notation offers several advantages in quantum mechanics. It allows for a more compact and intuitive representation of quantum systems, making it easier for scientists to perform calculations and interpret results. Additionally, it is a highly versatile notation that can be used to describe a wide range of quantum phenomena, including superposition, entanglement, and interference.

4. How do I perform mathematical operations using bra-ket notation?

To perform mathematical operations using bra-ket notation, you can use the standard rules of linear algebra. For example, to add states, you simply add the corresponding ket vectors. To combine a bra and a ket, you use the inner product, denoted <A|B>, which is analogous to the dot product in linear algebra except that the bra's components are complex-conjugated. It is important to note that bra-ket notation follows certain rules and conventions, so it is important to familiarize yourself with these before performing operations.
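As an illustrative sketch (hypothetical vectors, plain Python; not part of the original FAQ), the conjugation on the bra side means the inner product is not symmetric the way a real dot product is; instead <B|A> is the complex conjugate of <A|B>:

```python
# Sketch: inner product <A|B> with complex conjugation on the bra's
# components, and the symmetry <B|A> = conj(<A|B>).
def braket(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

A = [1, 1j]
B = [2, 3]
print(braket(A, B))                              # (2-3j)
print(braket(B, A) == braket(A, B).conjugate())  # True
```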

5. Is bra-ket notation the only notation used in quantum mechanics?

No, there are other notations used in quantum mechanics, such as matrix notation and wave function notation. However, bra-ket notation is widely used and preferred by many scientists due to its simplicity and versatility. It is also the standard notation used in many textbooks and research papers, making it important for scientists to understand and use effectively.
