What's the motivation for bracket notation in QM?

In summary: Dirac's bracket notation gives a compact way to express the inner product between two quantum states, lets any linear operator be written as a sum of rank-one ##|i\rangle \langle j|## pieces, and makes the rule for matrix multiplication fall out almost automatically.
  • #1
gulfcoastfella
Gold Member
I took a semester of QM as an undergrad engineering major, and I don't recall the motivation for replacing traditional vector notation with bracket notation. Can someone enlighten me? Thank you.
 
  • #2
gulfcoastfella said:
I don't recall the motivation for replacing traditional vector notation with bracket notation. Can someone enlighten me? Thank you.

Dirac used an alternative notation for inner products of wave functions/state functions that leads to the concepts of bras and kets.

The notation is sometimes more efficient than conventional mathematical notation.

It all begins by writing the inner product differently. The rule is to turn inner products into bra-ket pairs as follows:

##(u, v) \longrightarrow \langle u | v \rangle.##

Instead of the inner product comma we can simply put a vertical bar! The notation is convenient ...
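
As a concrete (purely illustrative) numeric sketch, here is the bra-ket ##\langle u | v \rangle## computed as an ordinary complex inner product in NumPy; the vectors and values are my own example, not from the thread:

```python
import numpy as np

# Kets |u> and |v> as complex vectors (illustrative values).
u = np.array([1 + 1j, 0.5j])
v = np.array([2 - 1j, 1.0 + 0j])

# <u|v> = sum_i conj(u_i) v_i : the bra is the conjugate of the ket.
braket = np.vdot(u, v)          # vdot conjugates its first argument
same = u.conj() @ v             # equivalent: row vector <u| times ket |v>
print(braket, same)
```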
 
  • #3
The brilliant invention of Paul Dirac (if I am not mistaken, this appears in the 1935 edition of his textbook) has the advantage of teaching people (such as you) Quantum Mechanics without worrying about the pesky details of Functional Analysis, which had been shown by the great John von Neumann to be the underlying mathematical theory of Quantum Mechanics.
 
Last edited:
  • Like
Likes gulfcoastfella, vanhees71 and bhobba
  • #4
Thanks for the replies.
 
  • #5
dextercioby said:
The brilliant invention of Paul Dirac (if I am not mistaken, this appears in the 1935 edition of his textbook) has the advantage of teaching people (such as you) Quantum Mechanics without worrying about the pesky details of Functional Analysis, which had been shown by the great John von Neumann to be the underlying mathematical theory of Quantum Mechanics.
And the good news is that Dirac even got it almost mathematically rigorous. One only has to formalize it a bit; that's what is nowadays known under the name "rigged Hilbert space". Of course, you have to learn functional analysis if you want QM to be mathematically rigorous. It's not necessary in all its details for practitioners (physicists) of QM, but it also doesn't hurt to read a bit into it, since it can help to better understand some finer points about the continuous spectra ("eigenvalues") of the self-adjoint and unitary operators occurring in the formalism of quantum theory.

Note, however, that all this has not yet succeeded for relativistic quantum theory, i.e., relativistic quantum field theory. There is no mathematically fully rigorous formulation of relativistic QFT for realistic cases (i.e., the Standard Model of elementary particle physics). Despite this lack of rigor, with some justification you can say it's the most accurate physical theory ever. It has proven very persistent, even though high-energy physicists eagerly look for deviations from, or physics beyond, the Standard Model, because they would like to figure out what might be an even better and more comprehensive theory than the Standard Model.
 
  • Like
Likes bhobba and gulfcoastfella
  • #6
Thanks for the in-depth reply, vanhees. Plenty of ideas for further reading.
 
  • #7
gulfcoastfella said:
I took a semester of QM as an undergrad engineering major, and I don't recall the motivation for replacing traditional vector notation with bracket notation. Can someone enlighten me? Thank you.

Not sure what Dirac was thinking, but bras and kets denote quantum states, not just vectors in a vector space. The notation allows you to denote the quantum state with wave function ##\psi## by the simple symbol ##|\psi\rangle##.

In Leonard Susskind's Lectures (on YouTube), he makes a strong point that quantum state space is intrinsically different from classical state space. Classical state space is just a parameter domain. Quantum state space is a vector space: points in it can be linearly combined to produce other states. The Dirac notation is a way to indicate this.

A bra vector may be thought of as the state whose wave function is the conjugate of the corresponding ket vector. Writing it as a bra makes it easy to express the Hermitian inner product. In QM, inner products are thought of as projections of one state onto another.

There is also a notational convenience when one wants to include operators. If ##A## operates on the ket ##|\psi\rangle## to give the new ket ##A|\psi\rangle##, then the inner product with the bra ##\langle\phi|## is just ##\langle\phi|A|\psi\rangle##.
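
A minimal NumPy sketch of that last point, with a hypothetical 2×2 operator and states chosen only for illustration:

```python
import numpy as np

# Illustrative operator and states (not from the thread).
A = np.array([[0, 1], [1, 0]], dtype=complex)        # some Hermitian operator A
psi = np.array([1, 0], dtype=complex)                # ket |psi>
phi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # ket |phi>

# <phi|A|psi>: bra (conjugated row vector), operator, ket, read left to right.
matrix_element = phi.conj() @ A @ psi
print(matrix_element)   # 0.7071... + 0j for these values
```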
 
Last edited:
  • #8
I'm a bit picky on that. A Hilbert-space vector ##|\psi \rangle## represents a state, but the state itself is given by the complete ray or, equivalently, by the projector ##\hat{\rho}_{\psi}=|\psi \rangle \langle \psi|## as the statistical operator. These are the pure states. A general state is given by a self-adjoint positive semidefinite operator with trace 1, the statistical operator ##\hat{\rho}##.
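
For concreteness, a small NumPy sketch (my own example) that builds ##\hat{\rho}_{\psi}=|\psi\rangle\langle\psi|## for a normalized ket and checks the stated properties:

```python
import numpy as np

# A normalized ket |psi> (illustrative values).
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)

# Statistical operator of the pure state: rho = |psi><psi|.
rho = np.outer(psi, psi.conj())

print(np.isclose(np.trace(rho), 1))                 # trace 1
print(np.allclose(rho, rho.conj().T))               # self-adjoint
print(np.allclose(rho @ rho, rho))                  # projector: rho^2 = rho (pure state)
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))    # positive semidefinite
```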
 
  • #9
Honestly I'm surprised that ket notation hasn't swept over all of linear algebra. It's a great tool for understanding.

For example, I always had trouble doing matrix multiplication by hand. "Is it row col or col row or... which dimensions have to match again?" But with kets that becomes downright trivial.

##|a\rangle \langle b|## is a transformation from ##a## to ##b## (or vice versa), and we can break any matrix ##M## into a sum of these single-value transformations ##M = \sum_{i,j} |i\rangle\langle j| M_{i,j}##. Correspondingly, ##\langle a | b \rangle## is a comparison (dot-product) between ##a## and ##b##. Our matrix breakdown has the property that ##\langle a | a \rangle = 1## while ##\langle a | b \rangle = 0## for ##b \neq a## so matrix multiplication is just...

##M \cdot N##
expand
##= \left(\sum_{i,j} M_{i,j} |i\rangle \langle j| \right) \left(\sum_{k,l} N_{k,l} |k\rangle \langle l | \right)##
combine
##= \sum_{i,j,k,l} M_{i,j} N_{k,l} |i\rangle \langle j| |k\rangle \langle l |##
The ##\langle j| |k\rangle## term discards all parts of the sum where ##j \neq k##:
##= \sum_{i,j,l} M_{i,j} N_{j,l} |i\rangle \langle l |##
group
##= \sum_{i,l} |i\rangle \langle l | \sum_j M_{i,j} N_{j,l}##
meaning
##(M \cdot N)_{i,l} = \sum_j M_{i,j} N_{j,l}##

The matrix multiplication definition falls right out of multiplying the ket breakdowns.
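
A quick numeric check of this decomposition (random matrices of my own choosing, with ##|i\rangle## the standard basis):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
N = rng.standard_normal((3, 3))

basis = np.eye(3)                                     # |0>, |1>, |2>
ketbra = lambda i, j: np.outer(basis[i], basis[j])    # |i><j|

# M rebuilt from its single-value pieces: M = sum_{i,j} M[i,j] |i><j|
M_rebuilt = sum(M[i, j] * ketbra(i, j) for i in range(3) for j in range(3))
print(np.allclose(M_rebuilt, M))

# (M N)_{i,l} = sum_j M_{i,j} N_{j,l}, as derived above
product = np.array([[sum(M[i, j] * N[j, l] for j in range(3))
                     for l in range(3)] for i in range(3)])
print(np.allclose(product, M @ N))
```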
 
  • Like
Likes bhobba and vanhees71
  • #10
vanhees71 said:
And the good news is that Dirac got it even almost mathematically rigorous. One has only to formalize it a bit. That's what's nowadays known under the name "rigged Hilbert space".

It took the efforts of three of the greatest mathematicians of the 20th century - Grothendieck, Schwartz and Gelfand (there were others as well).

But it's now the mainstay of much of applied mathematics. In fact distribution theory should be in the armory of any applied mathematician:
https://www.amazon.com/dp/0521558905/?tag=pfamazon01-20

It makes Fourier transformations, which would otherwise get bogged down in technical issues of convergence, a snap - it's really the only way to do it IMHO. But that is just one area - it has enriched many fields, e.g. white noise analysis:
http://www.asiapacific-mathnews.com/04/0404/0010_0013.pdf

Thanks
Bill
 
Last edited by a moderator:
  • #11
Another great book on Fourier transformations and distributions I liked very much when I learned the subject is

M. J. Lighthill, Introduction to Fourier Analysis and Generalised Functions, Cambridge University Press (1959)
 
  • Like
Likes bhobba and dextercioby
  • #12
Strilanc said:
Honestly I'm surprised that ket notation hasn't swept over all of linear algebra. It's a great tool for understanding.

For example, I always had trouble doing matrix multiplication by hand. "Is it row col or col row or... which dimensions have to match again?" But with kets that becomes downright trivial.

##|a\rangle \langle b|## is a transformation from ##a## to ##b## (or vice versa), and we can break any matrix ##M## into a sum of these single-value transformations ##M = \sum_{i,j} |i\rangle\langle j| M_{i,j}##. Correspondingly, ##\langle a | b \rangle## is a comparison (dot-product) between ##a## and ##b##. Our matrix breakdown has the property that ##\langle a | a \rangle = 1## while ##\langle a | b \rangle = 0## for ##b \neq a## so matrix multiplication is just...

- You use an inner product in order to eliminate the terms ##\langle j | k \rangle## when ##j \neq k##. Presumably if you change basis then you must redefine the inner product, since the new basis may not be orthonormal. While this may help you to keep track of indices, it is conceptually complicated.

- Generally linear transformations are mappings between different vector spaces. By using only indices, your notation suggests that the ##|i\rangle##, ##|j\rangle##, ##|l\rangle## and ##|k\rangle##'s all belong to the same basis of a single vector space.

- Once one has the proof that a linear transformation can be represented by a matrix with respect to a choice of bases, the proof that the composition of two linear transformations is the product of the two matrices falls out without reference to inner products. Inner products are an added structure imposed upon a vector space and are conceptually distinct from the idea of linear transformations themselves.
 
  • Like
Likes fresh_42
  • #13
lavinia said:
- You use an inner product in order to eliminate the terms ##\langle j | k \rangle## when ##j \neq k##. Presumably if you change basis then you must redefine the inner product, since the new basis may not be orthonormal. While this may help you to keep track of indices, it is conceptually complicated.

Why would I, half-way through a matrix product, change the basis that I write matrices in? The same issue applies to the grid of numbers: if the numbers don't mean the same thing within each matrix, then you can't compose the two matrices via normal matrix multiplication.

lavinia said:
- Generally linear transformations are mappings between different vector spaces. By using only indices, your notation suggests that the ##|i\rangle##, ##|j\rangle##, ##|l\rangle## and ##|k\rangle##'s all belong to the same basis of a single vector space.

Yup. In quantum computing we call it the computational basis. It's arbitrary, but good for making the math succinct.

lavinia said:
- Once one has the proof that a linear transformation can be represented by a matrix with respect to a choice of bases, the proof that the composition of two linear transformations is the product of the two matrices falls out without reference to inner products. Inner products are an added structure imposed upon a vector space and are conceptually distinct from the idea of linear transformations themselves.

Well, I did use an inner product in the proof. But you can justify all the logic with linearity and the definition ##M=|a\rangle\langle b| \implies \operatorname{rank}(M) = 1 \land M |b\rangle = | a \rangle \land \langle a|M = \langle b|## instead.
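
A tiny numeric check of those three properties (unit kets chosen for illustration; for non-normalized vectors the last two pick up factors of ##\langle b|b\rangle## and ##\langle a|a\rangle##):

```python
import numpy as np

a = np.array([1, 0], dtype=complex)            # unit ket |a>
b = np.array([0, 1], dtype=complex)            # unit ket |b>
M = np.outer(a, b.conj())                      # M = |a><b|

print(np.linalg.matrix_rank(M))                # rank(M) = 1
print(np.allclose(M @ b, a))                   # M|b> = |a>
print(np.allclose(a.conj() @ M, b.conj()))     # <a|M = <b|
```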
 
  • #14
Strilanc said:
Why would I, half-way through a matrix product, change the basis that I write matrices in? The same issue applies to the grid of numbers: if the numbers don't mean the same thing within each matrix, then you can't compose the two matrices via normal matrix multiplication.

I was just saying that in linear algebra a matrix representation of a linear map exists for any choice of bases. Even if you have an inner product, which generally you do not, arbitrary bases may not be orthonormal. So conceptually using an orthonormal basis implies that a change of basis would require defining a new inner product. This perhaps is one reason why these ideas are not universal in all applications of linear algebra.
 
  • #15
I suppose the main advantage of bra-ket notation ##\langle a|b\rangle## over the parenthesis notation ##(a,b)## is that it clearly distinguishes between a "bra" vector ##\langle a|## and a "ket" vector ##|b\rangle##. Nobody would write "(a," and ",b)" because it just messes too much with normal typography.

I suppose one could use index notation to distinguish them: ##a^i##, ##b_i##. I think index notation would've been nice, but history chose Dirac's notation. Dirac's notation also lets you put multiple characters in the label, like the states ##|3,2,1\rangle## and ##\langle 4,3,2|##.
 
  • #16
No! ##a^i## and ##b_i## are vector/covector components, not vectors, while ##|a \rangle## is a vector, independent of any basis.
 

1. What is bracket notation in quantum mechanics?

Bracket notation, also known as Dirac notation, is a mathematical representation used in quantum mechanics to describe the state of a quantum system. It uses a combination of angle brackets and mathematical symbols to represent the state of a quantum system, making it easier to perform calculations and understand the behavior of quantum particles.

2. Why is bracket notation used in quantum mechanics?

Bracket notation is used in quantum mechanics because it provides a concise and elegant way to represent the state of a quantum system. It also allows for simple calculations and manipulations, making it a useful tool for understanding the complex behavior of quantum particles.

3. How is bracket notation different from traditional mathematical notation?

Unlike traditional mathematical notation, bracket notation uses a combination of angle brackets and mathematical symbols to represent the state of a quantum system. It also allows for the representation of abstract quantities, such as quantum states and operators, which cannot be easily expressed in traditional notation.

4. What are the components of bracket notation?

The components of bracket notation include a ket vector, written ##|\psi\rangle##, a bra vector, written ##\langle\psi|##, and the bracket ##\langle\phi|\psi\rangle## formed by pairing a bra with a ket. Kets represent the state of a quantum system, bras represent the dual (adjoint) of a state, and the bracket gives the inner product between two states. Operators, such as Hamiltonians and observables, are also represented using bracket notation.

5. How does bracket notation relate to quantum mechanics principles?

Bracket notation is closely related to the principles of quantum mechanics, such as superposition and measurement. It allows for the representation of quantum states, which can be in a superposition of multiple states, and operators, which are used to measure and manipulate these states. Therefore, bracket notation is an essential tool for understanding and applying the principles of quantum mechanics.
