I What's the motivation for bracket notation in QM?

  1. Aug 10, 2016 #1

    gulfcoastfella

    Gold Member

    I took a semester of QM as an undergrad engineering major, and I don't recall the motivation for replacing traditional vector notation with bracket notation. Can someone enlighten me? Thank you.
     
  3. Aug 10, 2016 #2
    Dirac used an alternative notation for inner products of wave functions/state functions, which leads to the concepts of bras and kets.

    The notation is sometimes more efficient than the conventional mathematical notation.

    It all begins by writing the inner product differently. The rule is to turn inner products into bra-ket pairs as follows:

    ##(u, v) \longrightarrow \langle u | v \rangle##

    Instead of the inner product's comma we can simply put a vertical bar! The notation is convenient.
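
    A minimal numpy sketch (my own illustration, not from the original post) of that dictionary: the bra ##\langle u|## acts as the conjugate transpose of the ket ##|u\rangle##, so ##\langle u | v \rangle## is just the ordinary Hermitian inner product ##(u, v)##.

    Code (Python):
    import numpy as np

    # Two kets |u> and |v> as column vectors in C^2 (the values are arbitrary examples).
    u = np.array([[1.0 + 1.0j], [0.5]])
    v = np.array([[2.0], [1.0 - 1.0j]])

    # The bra <u| is the conjugate transpose of the ket |u>.
    bra_u = u.conj().T

    # <u|v> is the ordinary Hermitian inner product (u, v).
    braket = (bra_u @ v).item()
    print(braket)              # same value as np.vdot(u, v)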
     
  4. Aug 10, 2016 #3

    dextercioby

    Science Advisor
    Homework Helper

    The brilliant invention of Paul Dirac (if I am not mistaken, it appears in the 1935 edition of his textbook) has the advantage of teaching people (such as you) Quantum Mechanics without worrying about the pesky details of Functional Analysis, which the great John von Neumann had shown to be the underlying mathematical theory of Quantum Mechanics.
     
    Last edited: Aug 11, 2016
  5. Aug 10, 2016 #4

    gulfcoastfella

    Gold Member

    Thanks for the replies.
     
  6. Aug 11, 2016 #5

    vanhees71

    Science Advisor
    2016 Award

    And the good news is that Dirac even got it almost mathematically rigorous; one only has to formalize it a bit. That's what is nowadays known under the name "rigged Hilbert space". Of course, you have to learn functional analysis if you want QM to be mathematically rigorous. That's not necessary in all details for practitioners (physicists) of QM, but it also doesn't hurt to read a bit into it, since it can help you understand some finer details about the continuous spectra ("eigenvalues") of the self-adjoint and unitary operators occurring in the formalism of quantum theory.

    Note, however, that all this has not yet been fully successful for relativistic quantum theory, i.e., relativistic quantum field theory. There is no mathematically fully rigorous formulation of relativistic QFT for realistic cases (i.e., the Standard Model of elementary particle physics). Despite this lack of rigor, with some justification you can say it's the most accurate physical theory ever. And it's very persistent, despite the fact that high-energy physicists eagerly look for deviations from the Standard Model, or physics beyond it, because they would like to figure out what an even better and more comprehensive theory than the Standard Model might be.
     
  7. Aug 12, 2016 #6

    gulfcoastfella

    Gold Member

    Thanks for the in-depth reply, vanhees. Plenty of ideas for further reading.
     
  8. Aug 13, 2016 #7

    lavinia

    Science Advisor

    Not sure what Dirac was thinking, but bras and kets denote quantum states, not just vectors in a vector space. The notation allows you to denote "the quantum state with wave function ##\psi##" by the simple symbol ##|\psi\rangle##.

    In Leonard Susskind's lectures (on YouTube), he makes a strong point that quantum state space is intrinsically different from classical state space. Classical state space is just a parameter domain. Quantum state space is a vector space: points in it can be linearly combined to produce other states. The Dirac notation is a way to indicate this.

    A bra vector may be thought of as the state whose wave function is the conjugate of the corresponding ket vector. Writing it as a bra makes it easy to express the Hermitian inner product. In QM inner products are thought of as projections of one state onto another.

    There is also a notational convenience when one wants to include operators. If ##A## operates on the ket ##|\psi\rangle## to give the new ket ##A|\psi\rangle##, then the inner product with the bra ##\langle\varphi|## is just ##\langle\varphi|A|\psi\rangle##.
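
    For concreteness, a small numpy sketch (my own example values, not from the post) of the matrix element ##\langle\varphi|A|\psi\rangle## computed as bra times operator times ket, i.e. conjugate transpose times matrix times column vector:

    Code (Python):
    import numpy as np

    # Example kets |psi>, |phi> and an operator A on C^2 (arbitrary illustrative choices).
    psi = np.array([[1.0], [1.0j]]) / np.sqrt(2)
    phi = np.array([[1.0], [0.0]])
    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])   # e.g. the Pauli-x matrix

    # <phi|A|psi>: the bra <phi| is the conjugate transpose of the ket |phi>.
    matrix_element = (phi.conj().T @ A @ psi).item()
    print(matrix_element)        # 1j/sqrt(2) for these example values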
     
    Last edited: Aug 22, 2016
  9. Aug 13, 2016 #8

    vanhees71

    Science Advisor
    2016 Award

    I'm a bit picky on that. A Hilbert-space vector ##|\psi \rangle## represents a state, but the state itself is given by the complete ray or, equivalently, by the projector ##\hat{\rho}_{\psi}=|\psi \rangle \langle \psi|##, the statistical operator. These are the pure states. A general state is given by a self-adjoint positive semidefinite operator with trace 1, the statistical operator ##\hat{\rho}##.
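
    As a quick numpy illustration (my own example, not part of the post): the pure-state projector ##\hat{\rho}_{\psi}=|\psi\rangle\langle\psi|## built from a normalized ket is Hermitian, has trace 1, and satisfies ##\hat{\rho}^2=\hat{\rho}##.

    Code (Python):
    import numpy as np

    # A normalized ket |psi> in C^2 (arbitrary example state).
    psi = np.array([[1.0], [1.0j]]) / np.sqrt(2)

    # Pure-state statistical operator rho = |psi><psi|.
    rho = psi @ psi.conj().T

    print(np.allclose(rho, rho.conj().T))   # Hermitian
    print(np.trace(rho).real)               # trace 1
    print(np.allclose(rho @ rho, rho))      # projector, i.e. a pure state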
     
  10. Aug 13, 2016 #9

    Strilanc

    Science Advisor

    Honestly I'm surprised that ket notation hasn't swept over all of linear algebra. It's a great tool for understanding.

    For example, I always had trouble doing matrix multiplication by hand. "Is it row col or col row or... which dimensions have to match again?" But with kets that becomes downright trivial.

    ##|a\rangle \langle b|## is a rank-one transformation that sends ##|b\rangle## to ##|a\rangle##, and we can break any matrix ##M## into a sum of these pieces: ##M = \sum_{i,j} M_{i,j}\, |i\rangle\langle j|##. Correspondingly, ##\langle a | b \rangle## is a comparison (dot product) between ##a## and ##b##. For the basis kets used in this breakdown, ##\langle a | a \rangle = 1## while ##\langle a | b \rangle = 0## for ##b \neq a##, so matrix multiplication is just...

    ##M \cdot N##
    expand
    ##= \left(\sum_{i,j} M_{i,j} |i\rangle \langle j| \right) \left(\sum_{k,l} N_{k,l} |k\rangle \langle l | \right)##
    combine
    ##= \sum_{i,j,k,l} M_{i,j} N_{k,l} |i\rangle \langle j| |k\rangle \langle l |##
    The ##\langle j| |k\rangle## term discards all parts of the sum where ##j \neq k##:
    ##= \sum_{i,j,l} M_{i,j} N_{j,l} |i\rangle \langle l |##
    group
    ##= \sum_{i,l} |i\rangle \langle l | \sum_j M_{i,j} N_{j,l}##
    meaning
    ##(M \cdot N)_{i,l} = \sum_j M_{i,j} N_{j,l}##


    The matrix multiplication definition falls right out of multiplying the ket breakdowns.
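
    Here is a small numpy check (my own sketch, assuming the standard/computational basis) that summing the rank-one pieces ##M_{i,j} N_{k,l}\, |i\rangle\langle j|k\rangle\langle l|## really reproduces the ordinary matrix product:

    Code (Python):
    import numpy as np

    def basis_ket(i, dim):
        """Column vector |i> of the computational basis of R^dim."""
        e = np.zeros((dim, 1))
        e[i, 0] = 1.0
        return e

    rng = np.random.default_rng(0)
    dim = 3
    M = rng.standard_normal((dim, dim))
    N = rng.standard_normal((dim, dim))

    # Sum over i, j, k, l of M_ij N_kl |i><j|k><l|; the factor <j|k> kills every term with j != k.
    product = np.zeros((dim, dim))
    for i in range(dim):
        for j in range(dim):
            for k in range(dim):
                for l in range(dim):
                    bracket = (basis_ket(j, dim).T @ basis_ket(k, dim)).item()   # <j|k>
                    outer = basis_ket(i, dim) @ basis_ket(l, dim).T              # |i><l|
                    product += M[i, j] * N[k, l] * bracket * outer

    print(np.allclose(product, M @ N))   # True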
     
  11. Aug 13, 2016 #10

    bhobba

    Science Advisor
    Gold Member

    Making it rigorous took the efforts of three of the greatest mathematicians of the 20th century - Grothendieck, Schwartz and Gelfand (there were others as well).

    But it's now the mainstay of much of applied mathematics. In fact distribution theory should be in the armory of any applied mathematician:
    https://www.amazon.com/Theory-Distributions-Nontechnical-Introduction/dp/0521558905

    It makes Fourier transforms, which would otherwise get bogged down in technical issues of convergence, a snap - it's really the only way to do it IMHO. But that is just one area - it has enriched many others, e.g. white noise analysis:
    http://www.asiapacific-mathnews.com/04/0404/0010_0013.pdf

    Thanks
    Bill
     
    Last edited by a moderator: May 8, 2017
  12. Aug 14, 2016 #11

    vanhees71

    Science Advisor
    2016 Award

    Another great book on Fourier transforms and distributions, which I liked very much when I learnt the subject, is

    M. J. Lighthill, Introduction to Fourier Analysis and Generalised Functions, Cambridge University Press (1959)
     
  13. Aug 14, 2016 #12

    lavinia

    Science Advisor

    - You use an inner product in order to eliminate the terms ##\langle j | k \rangle## when ##j \neq k##. Presumably, if you change basis then you must redefine the inner product, since the new basis may not be orthonormal. While this may help you keep track of indices, it is conceptually complicated.

    - Generally, linear transformations are mappings between different vector spaces. By using only indices, your notation suggests that the ##|i\rangle##, ##|j\rangle##, ##|k\rangle## and ##|l\rangle##'s all belong to the same basis of a single vector space.

    - Once one has the proof that a linear transformation can be represented by a matrix with respect to a choice of bases, the proof that the composition of two linear transformations is the product of the two matrices falls out without reference to inner products. Inner products are an added structure imposed upon a vector space and are conceptually distinct from the idea of linear transformations themselves.
     
  14. Aug 14, 2016 #13

    Strilanc

    Science Advisor

    Why would I, half-way through a matrix product, change the basis that I write matrices in? The same issue applies to the grid of numbers: if the numbers don't mean the same thing within each matrix, then you can't compose the two matrices via normal matrix multiplication.

    Yup. In quantum computing we call it the computational basis. It's arbitrary, but good for making the math succinct.

    Well, I did use an inner product in the proof. But you can justify all the logic with linearity and the definition ##M=|a\rangle\langle b| \implies rank(M) = 1 \land M |b\rangle = | a \rangle \land \langle a|M = \langle b|## instead.
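
    A quick numpy check (my own sketch, with arbitrary normalized example vectors) of those defining properties of ##M = |a\rangle\langle b|##:

    Code (Python):
    import numpy as np

    # Arbitrary normalized example vectors |a> and |b> in C^3.
    a = np.array([[1.0], [2.0], [0.5j]]); a = a / np.linalg.norm(a)
    b = np.array([[0.0], [1.0j], [1.0]]); b = b / np.linalg.norm(b)

    M = a @ b.conj().T   # M = |a><b|

    print(np.linalg.matrix_rank(M))                   # 1
    print(np.allclose(M @ b, a))                      # M|b> = |a>
    print(np.allclose(a.conj().T @ M, b.conj().T))    # <a|M = <b|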
     
  15. Aug 14, 2016 #14

    lavinia

    Science Advisor

    I was just saying that in linear algebra a matrix representation of a linear map exists for any choice of bases. Even if you have an inner product, which generally you do not, arbitrary bases may not be orthonormal. So conceptually, using an orthonormal basis implies that a change of basis would require defining a new inner product. This is perhaps one reason why these ideas are not universal in all applications of linear algebra.
     
  16. Aug 18, 2016 #15
    I suppose the main advantage of bra-ket notation ##\langle a|b\rangle## over the parenthesis notation ##(a,b)## is that it clearly distinguishes between a "bra" vector ##\langle a|## and a "ket" vector ##|b\rangle##. Nobody would write "##(a,##" and "##,b)##" because it just messes too much with normal typography.

    I suppose one could use index notation to distinguish them: ##a^i##, ##b_i##. I think index notation would've been nice, but history chose Dirac's notation. Dirac's notation also lets you put multiple characters in the label, like the states ##|3,2,1\rangle## and ##\langle 4,3,2|##.
     
  17. Aug 19, 2016 #16

    vanhees71

    Science Advisor
    2016 Award

    No! ##a^i## and ##b_i## are vector/covector components, not vectors, while ##|a \rangle## is a vector, independent of any basis.
     