
Dirac Notation.

  1. Jun 9, 2009 #1
    Hello, I'm fuzzy on how Dirac notation works especially when operators are added in. Does anyone have a clear explanation (the simpler the better) that they can give to me, and or a website or book that does a good job of explaining it?
  3. Jun 9, 2009 #2
    Any wave function psi can be represented in Dirac notation as a ket, written |psi>. Operators act on these kets in the same way they would act on an ordinary wave function.
    Let H be the Hamiltonian operator; then the eigenvalue equation is
    H psi = E psi, where psi is an eigenfunction of the Hamiltonian.
    In Dirac notation, this is written as
    H|psi> = E|psi>
    Alternatively, psi is sometimes not written explicitly inside the ket; the i-th eigenfunction is simply written as |i>. For example, the first eigenfunction of the SHO is sometimes written as |0>, the next eigenfunction as |1>, etc.
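    In finite dimensions you can play with the eigenvalue equation directly, treating kets as column vectors and operators as matrices. A minimal sketch assuming NumPy is available (the 2x2 matrix is just a toy Hermitian "Hamiltonian", not the SHO):

    ```python
    import numpy as np

    # A ket is a column vector; an operator is a matrix acting on it.
    H = np.array([[2.0, 1.0],
                  [1.0, 2.0]])         # toy Hermitian "Hamiltonian"

    # Eigendecomposition: H|i> = E_i |i>
    energies, kets = np.linalg.eigh(H)  # columns of `kets` are |0>, |1>, ...

    ket0 = kets[:, 0]                   # lowest eigenket, |0>
    E0 = energies[0]

    # Check the eigenvalue equation H|0> = E_0 |0>
    assert np.allclose(H @ ket0, E0 * ket0)
    ```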

    Every wave function has a complex conjugate, psi*, where * indicates complex conjugation. In Dirac notation, the complex conjugate of a wave function is written as a bra vector, which looks like <psi|.

    When a bra vector and a ket vector are written down as <i|j> for example, it is read as though it is the complex conjugate of the wavefunction i times the wavefunction j integrated over all space. That is, when a bra and ket are written down as one, it means the author has intended not just a multiplication but also an integral over all space.

    An expression such as <i|Q|j> means that the operator Q acts on the wave function |j>, and once this result is obtained, it is multiplied by <i| and integrated over all space.
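    In the finite-dimensional (matrix) picture these bra-ket combinations become ordinary vector and matrix products, with the integral over all space replaced by a sum over components. A hedged NumPy sketch (the vectors and the operator Q here are made-up examples):

    ```python
    import numpy as np

    # Kets as vectors; the bra <i| is the conjugate transpose of |i>.
    i = np.array([1.0, 1j]) / np.sqrt(2)
    j = np.array([1.0, -1j]) / np.sqrt(2)

    Q = np.array([[0.0, 1.0],
                  [1.0, 0.0]])            # a toy Hermitian operator

    braket = np.vdot(i, j)                # <i|j>: conjugate the first factor, then sum
    matrix_element = np.conj(i) @ (Q @ j) # <i|Q|j>: Q acts on |j>, then project onto <i|

    assert np.isclose(np.vdot(i, i), 1.0) # normalization: <i|i> = 1
    ```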
  4. Jun 9, 2009 #3
    so it is basically another way to write matrix operations?
  5. Jun 9, 2009 #4


    A ket is a member of a complex Hilbert space H, i.e. it's just a vector written in a funny way. A bra is a member of H*, the dual space of H. H* is defined as the set of all bounded linear functionals on H, with addition of vectors and multiplication by a scalar defined in the obvious way:

    [tex](f+g)x=fx+gx,\qquad (af)x=a(fx)[/tex]

    These definitions give H* the structure of a vector space.

    A functional [itex]f:H\rightarrow\mathbb C[/itex] is said to be bounded if there exists a real number M such that [itex]|fx|\leq M\|x\|[/itex] for all x in H. Note that I'm using the notational convention that says that we write fx instead of f(x) when f is linear. It's pretty easy to show that a linear functional (any linear operator actually) is bounded if and only if it's continuous. (Link). So H* can be defined equivalently as the set of all continuous linear functionals on H.

    Let's write the inner product of x and y as (x,y). The physicist's convention is to let the inner product be linear in the second variable and antilinear in the first. The Riesz representation theorem (which is easy to prove (link) if you know the projection theorem already) says that for each f in H*, there's a unique x0 in H such that [itex]f(x)=(x_0,x)[/itex] for all x, and that this x0 satisfies [itex]\|x_0\|=\|f\|[/itex]. The norm on the right is the operator norm, defined by [itex]\|f\|=\sup_{\|x\|=1}|fx|[/itex]. The map [itex]f\mapsto x_0[/itex] is a bijection from H* onto H, so there's exactly one bra for each ket, and vice versa. It's not a vector space isomorphism though, because it's antilinear rather than linear, as you can easily verify for yourself. (A function [itex]T:U\rightarrow V[/itex], where U and V are complex vector spaces, is said to be antilinear if [itex]T(ax+by)=a^*Tx+b^*Ty[/itex] for all vectors x,y and all complex numbers a,b.)

    We can use this antilinear bijection to define an inner product on H*. Let x' and y' be the bras that correspond to the kets x and y respectively (via the bijection mentioned above). We define (x',y')=(x,y). This definition gives H* the structure of a Hilbert space, and ensures that the antilinear bijection we defined preserves distances between points. The norm on H* defined by the inner product is consistent with the operator norm that we used before, because

    [tex]\sqrt{(x',x')}=\sqrt{(x,x)}=\|x\|=\|x'\|[/tex]

    where the expression on the left is the norm defined by the inner product, and the norm on the far right is the operator norm. The last equality follows from the Riesz theorem, as mentioned above.
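    In finite dimensions the ket-to-bra map is just the conjugate transpose, and its antilinearity can be checked directly. A sketch assuming NumPy; the helper name `bra` is mine, not standard:

    ```python
    import numpy as np

    def bra(ket):
        """The bra corresponding to a ket: its conjugate transpose."""
        return np.conj(ket).T

    x = np.array([1.0 + 2j, 3.0])
    y = np.array([0.0, 1.0 - 1j])
    a, b = 2.0 + 1j, -1j

    # Antilinearity: bra(a x + b y) = a* bra(x) + b* bra(y)
    lhs = bra(a * x + b * y)
    rhs = np.conj(a) * bra(x) + np.conj(b) * bra(y)
    assert np.allclose(lhs, rhs)

    # The bra acting on a ket reproduces the inner product (x, y)
    assert np.isclose(bra(x) @ y, np.vdot(x, y))
    ```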

    So far I've been writing the kets as x,y, etc. From now on I'll write them as [itex]|\alpha\rangle,\ |\beta\rangle[/itex], etc. The bra in H* that corresponds to the ket [itex]|\alpha\rangle[/itex] (via the antilinear bijection mentioned above) is written as [itex]\langle\alpha|[/itex]. Note that we have

    [tex](|\alpha\rangle,|\beta\rangle)=\langle\alpha|(|\beta\rangle)=\langle\alpha|\,|\beta\rangle=\langle\alpha|\beta\rangle[/tex]

    The first equality is what we get from the Riesz theorem. The second is the notational convention for linear functions that I mentioned above. The third is another notational convention that I haven't explained yet: we just drop one of the vertical lines to make it look nicer.

    Note that the right-hand side isn't the scalar product of [itex]\alpha[/itex] and [itex]\beta[/itex] (those symbols aren't even defined) or a "scalar product" of the bra [itex]\langle\alpha|[/itex] with the ket [itex]|\beta\rangle[/itex] (that concept hasn't been defined). It's the scalar product of the kets [itex]|\alpha\rangle[/itex] and [itex]|\beta\rangle[/itex], or equivalently, the bra [itex]\langle\alpha|[/itex] acting on the ket [itex]|\beta\rangle[/itex].

    Everything else is defined to make it look like we're just multiplying things together with an associative multiplication operation. For example, the expression [itex]|\alpha\rangle\langle\alpha|[/itex] is defined as the operator that takes an arbitrary ket [itex]|\beta\rangle[/itex] to the ket [itex]\langle\alpha|\beta\rangle|\alpha\rangle[/itex]. This definition can be expressed as

    [tex]\big(|\alpha\rangle\langle\alpha|\big)|\beta\rangle=|\alpha\rangle\big(\langle\alpha|\beta\rangle\big)[/tex]

    if we allow ourselves to write the scalars on the right. The convention is of course to allow that, so we would write both the left-hand side and the right-hand side of this equation as [itex]|\alpha\rangle\langle\alpha|\beta\rangle[/itex].
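    In matrix language, [itex]|\alpha\rangle\langle\alpha|[/itex] is the outer product of a column vector with its conjugate transpose. A small NumPy illustration (the specific vectors are arbitrary):

    ```python
    import numpy as np

    alpha = np.array([1.0, 1j]) / np.sqrt(2)  # a normalized ket
    beta = np.array([2.0, 3.0 + 1j])

    # |alpha><alpha| as a matrix: outer product of the ket with its bra
    P = np.outer(alpha, np.conj(alpha))

    # Acting on |beta> gives <alpha|beta> |alpha>
    assert np.allclose(P @ beta, np.vdot(alpha, beta) * alpha)

    # For a normalized alpha this operator is a projector: P @ P = P
    assert np.allclose(P @ P, P)
    ```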

    Here's an easy exercise: Define the expression [itex]\langle\alpha|A[/itex], where A is an operator, in a way that's consistent with what I just said.

    Note that nothing I have said so far tells you how to make sense of expressions such as

    [tex]\int da|a\rangle\langle a|=1[/tex]

    which includes "eigenvectors" of an operator that doesn't have any eigenvectors. I still don't fully understand how to make sense of those myself, but I'm working on it. A full understanding includes knowledge about how to prove at least one of the relevant spectral theorems. This is the sort of stuff that you might see near the end of a 300-page book on functional analysis.
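    The finite-dimensional analogue of that resolution of the identity is unproblematic, though: summing |i><i| over an orthonormal eigenbasis gives the identity matrix. A NumPy sketch (the 2x2 matrix is just an illustrative Hermitian operator):

    ```python
    import numpy as np

    H = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    _, kets = np.linalg.eigh(H)   # columns form an orthonormal eigenbasis

    # sum_i |i><i| = 1  (discrete analogue of  integral da |a><a| = 1)
    identity = sum(np.outer(kets[:, i], np.conj(kets[:, i]))
                   for i in range(kets.shape[1]))
    assert np.allclose(identity, np.eye(2))
    ```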
    Last edited: Jun 10, 2009
  6. Jun 9, 2009 #5


    Sakurai is good on this
  7. Jun 9, 2009 #6


    Sakurai does a good job of teaching you how to use bra-ket notation, but it's pretty bad if you want definitions. As I recall, it doesn't even define the dual space. You can read Sakurai and never realize that a bra is a functional on the Hilbert space of kets.
  8. Jun 9, 2009 #7


    Yes, but the OP asked "I'm fuzzy on how Dirac notation works", thus he wants to use it :-)
  9. Jun 9, 2009 #8
    Ballentine does better than Sakurai on this, in my opinion.
  10. Jun 9, 2009 #9
  11. Jun 9, 2009 #10
    Last edited by a moderator: May 4, 2017
  12. Jun 10, 2009 #11


    What I said about Sakurai in #6 applies even more to Dirac. If you only care about how to use the notation, then Dirac's explanation is great, but he makes unnecessary assumptions and it's not even clear that they can be justified. In addition to that, his definitions are sloppy.

    I have added a few more details to #4. This is the stuff that I wish someone had explained to me when I was studying Sakurai.
  13. Jun 10, 2009 #12
    The lectures were actually very good, although I have not finished them all yet. I tried looking into the Sakurai book, but it seems to be very heavily math oriented and to require a very good mathematics background, which I don't believe I possess.
  14. Jun 10, 2009 #13


    You really don't need to understand anything more than the concepts "vector space", "complex number" and "linear operator" to read the part of Sakurai that explains the bra-ket notation. To understand the first few chapters, you also have to understand the concepts "basis", "eigenvalue" and "eigenvector". These are all concepts from an introductory course on linear algebra, and he still explains them in the book. Sakurai is very far from math oriented in my opinion. Certainly much less math oriented than my post #4. However, if you feel that way, maybe Dirac is better for you. Don't bother with Ballentine. It's a better book, but it requires a higher level of mathematical maturity than Sakurai.
  15. Jun 10, 2009 #14
    I agree Ballentine does require more mathematical maturity. However, regardless of what you do, I suggest everyone at least has a look at it at some time. I find it to be excellent.
  16. Jun 10, 2009 #15
    We have to keep in mind that functional analysis and the theory of distributions were motivated in part by physics. So it can be a little unnatural to first study the theory of Hilbert and Banach spaces and the theory of distributions and then learn quantum mechanics, because then you don't learn how a physicist really thinks.

    In physics, you use whatever ad hoc and ill defined formalism that appears to work for your problem, and only later do you try to make the formalism more rigorous (but usually you leave that to the mathematicians).
  17. Jun 10, 2009 #16


    I completely agree with that. We like that book here at Physics Forums. See e.g. this thread in the science book forum.
    Last edited: Jun 10, 2009
  18. Jun 10, 2009 #17
    Oh well, I'm actually only a sophomore in high school, but I have been trying as hard as I can to expand my knowledge in science as well as math. I've studied what I believe to be enough calc to easily get through an honors-level calc class (I don't know about AP, though), and I have a very basic understanding of trigonometry; I haven't studied it a great deal. I also probably have a good enough understanding of kinetic physics to get me through a physics course, but very little knowledge of electrical theories such as Maxwell's equations. Additionally, I have tried with very little success to understand vectors beyond the simple facts of adding and subtracting them. The same goes for linear algebra and matrices (although matrices, I think, I get much better than linear algebra and vectors). This is where my problem arose when trying to read about Dirac notation in a quantum physics book: I had a great deal of interest and bought what seemed to be the simplest book to give me a decent understanding of some mathematics as well as concepts (Quantum Physics for Dummies), but I failed as soon as I got to Dirac's bra-ket notation. I simply didn't understand what it was doing, where it came from, how it worked, or how to apply operators to it.
  19. Jun 10, 2009 #18
    Actually, I'm not sure if this is allowed (so stop me if it isn't), but is it OK if I ask some questions I had on things like linear algebra, matrices, etc. in this thread, or do I have to make a completely new one on the new subject?
  20. Jun 10, 2009 #19
    It may be better to do it here because then everyone knows your background.
  21. Jun 10, 2009 #20
    Something like "Linear Algebra Demystified" might be a good start, or any introductory text. I'd pay particular attention to the discussions of vector spaces and inner product spaces.
  22. Jun 10, 2009 #21
    OK, well, my first question is on vectors. Where does the formula C^2 = A^2 + B^2 - 2AB*cos(angle OPQ) come from? It came up in Schaum's Outline of Vector Analysis when I was trying to figure out how to add vectors. I keep looking at it and I can't figure out how it's derived or how it gives an answer. Also, what does the cos of an angle that isn't in a right triangle mean? This is the only question I can think of right off the bat, but I will go through my books again, familiarize myself with some of the material, and ask my other questions in detail later.
  23. Jun 10, 2009 #22
    I'll also try finding Linear Algebra Demystified; I think the Demystified series is a very good one, especially since they add solved problems at the end of each section.
  24. Jun 10, 2009 #23
    Well, the specific result you mentioned is actually just the law of cosines. But you can get it simply by recognizing that A dot B = |A||B|cos(theta).
  25. Jun 10, 2009 #24
    Is the Dirac matrix algebra, though, the one expounded in Schouten?
  26. Jun 11, 2009 #25


    You're definitely going to have to learn the basics of linear algebra (the mathematics of linear operators between finite-dimensional vector spaces) if you're going to understand quantum mechanics at all. Linear algebra is actually not the math that's needed for a rigorous treatment of QM, but if you understand linear algebra really well, you will at least have the right intuition about how to deal with vectors and operators.

    The math that's needed for a rigorous treatment is called functional analysis. It's the math of linear operators between vector spaces that are equipped with an inner product (or at least a norm) such that all Cauchy sequences are convergent. So it's basically linear algebra generalized to a class of vector spaces that may be infinite-dimensional. Linear algebra is the easiest part of college-level math. Functional analysis is the hardest. Most physicists never study functional analysis. But they do study linear algebra, because it's the absolute minimum you have to do to at least get some intuition about what you're doing in QM.

    I actually didn't even recognize that formula at first, because I haven't used it since my first year at the university, at least not in that form. I have however had to use the results

    [tex]\|\vec A-\vec B\|^2=\|\vec A\|^2+\|\vec B\|^2-2\vec A\cdot\vec B[/tex]


    [tex]\vec A\cdot\vec B=\|\vec A\|\|\vec B\|\cos\theta[/tex]

    a lot. (Here [itex]\theta[/itex] is the angle between [itex]\vec A[/itex] and [itex]\vec B[/itex]). The Wikipedia article on the law of cosines explains it really well. I recommend you take a look at several different proofs. In particular, I suggest the proof using the distance formula, and then you make sure you understand the section titled vector formulation, because it explains the second of the equalities above, which is more important than the cosine law itself.
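    Both identities are easy to verify numerically. A small sketch assuming NumPy (the vectors are arbitrary examples):

    ```python
    import numpy as np

    A = np.array([3.0, 0.0])
    B = np.array([1.0, 2.0])
    C = A - B                            # the third side of the triangle

    a, b, c = np.linalg.norm(A), np.linalg.norm(B), np.linalg.norm(C)
    cos_theta = np.dot(A, B) / (a * b)   # A . B = |A||B| cos(theta)

    # Law of cosines: C^2 = A^2 + B^2 - 2 A B cos(theta)
    assert np.isclose(c**2, a**2 + b**2 - 2 * a * b * cos_theta)
    ```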

    See the Wikipedia article unit circle, in particular the image at the upper right. cos is defined by that image, for arbitrary angles. You can also check out the cosine article if you want more information.