Wavefunctions and eigenkets

  1. May 10, 2014 #1
    Hi.

    I've been learning quantum mechanics from different sources and I'm starting to notice that they have really different ways of treating certain things.

    For example: in some places Griffiths works with wavefunctions, while Sakurai works with eigenkets. This confuses me. As I understand it, wavefunctions are simply the coefficients of a state's expansion in the position basis. How then can an operator "act" on a wavefunction?

    Thank you for your time.
     
  3. May 10, 2014 #2

    WannabeNewton

    Science Advisor

    ##A \psi(x) = \langle x |A|\psi \rangle##.
     
  4. May 10, 2014 #3

    Matterwave

    Science Advisor
    Gold Member

    Just to be a little bit more explicit, and to cover a little more of the notation that you have probably seen:

    $$A\psi(x)\equiv A\left<x|\psi\right>\equiv\left<x|A|\psi\right>$$
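
    In a finite-dimensional toy model you can check this identity directly: the "wavefunction" is just the column of components of the state in a chosen basis, and acting with the matrix of ##A## on that column gives, component by component, exactly ##\langle x|A|\psi\rangle##. A minimal NumPy sketch of that bookkeeping (the matrix and state below are arbitrary illustrative choices):

```python
import numpy as np

# Finite-dimensional toy: take the "position" basis to be the standard basis
# e_0, ..., e_{N-1}, so the "wavefunction" is just the column of components <x|psi>.
N = 4
rng = np.random.default_rng(0)

A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = A + A.conj().T                       # make A Hermitian, like an observable

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = psi / np.linalg.norm(psi)          # normalised state |psi>

A_psi = A @ psi                          # the ket A|psi>

for x in range(N):
    e_x = np.zeros(N)
    e_x[x] = 1.0                         # the basis ket |x>
    assert np.isclose(A_psi[x], e_x.conj() @ A @ psi)   # (A psi)(x) = <x|A|psi>
```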
     
  5. May 11, 2014 #4

    bhobba

    Staff: Mentor

    I think you need to become acquainted with Gleason's theorem:
    http://kof.physto.se/cond_mat_page/theses/helena-master.pdf

    The fundamental thing is the observables. Gleason's theorem (plus a couple of other things, the most important of which is non-contextuality) implies that a positive operator P of unit trace exists such that the expected value of an observable O is trace(PO).

    By definition P is called the state of the system. States of the form |x><x| are called pure; they are the states Griffiths is talking about, and for them trace(PO) = <x|O|x>.
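
    For a pure state P = |x><x| that trace formula reduces to the familiar expectation value <x|O|x>, which is easy to check in a small matrix example. A quick sketch (the observable and state below are arbitrary illustrative choices, not anything canonical):

```python
import numpy as np

# Toy check that for a pure state P = |psi><psi| the trace formula gives the
# usual expectation value.  The observable O and the state are arbitrary choices.
rng = np.random.default_rng(1)
N = 3

O = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
O = O + O.conj().T                                   # Hermitian observable

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = psi / np.linalg.norm(psi)                      # unit vector

P = np.outer(psi, psi.conj())                        # pure-state density operator

assert np.isclose(np.trace(P), 1.0)                  # positive operator of unit trace
assert np.isclose(np.trace(P @ O), psi.conj() @ O @ psi)   # trace(PO) = <psi|O|psi>
```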

    Mathematically this is probably a bit advanced for you right now, but that's the reason Griffiths and books at that level can be a bit confusing if you think a bit beyond their cookbook approach. You really need a bit more mathematical depth for the subtleties to be clear.

    I suggest reading the first three chapters of Ballentine
    https://www.amazon.com/Quantum-Mechanics-A-Modern-Development/dp/9810241054

    It's mathematically advanced, and you probably won't understand it all, but you will likely get the gist of the correct approach.

    Thanks
    Bill
     
    Last edited by a moderator: May 6, 2017
  6. May 11, 2014 #5

    vanhees71

    Science Advisor
    2016 Award

    The notation in theoretical physics is sometimes a bit messy, so here are some remarks on the relation between the abstract Hilbert-space formulation and the wave-mechanics formulation (of non-relativistic quantum theory; in relativistic QT it doesn't make much sense to talk about "wave functions", but that's another story).

    In the Hilbert-space formulation a (pure) state is represented by a ray, which in turn is represented by any unit-vector [itex]|\psi \rangle[/itex] contained in the ray. This unit vector is thus only determined up to a phase factor, which is irrelevant for any physically relevant quantities.

    An observable of the system is represented by a self-adjoint operator on Hilbert space. To define the wave function one needs a complete set of compatible observables, i.e., a corresponding set of mutually commuting self-adjoint operators [itex]\{A_1,\ldots,A_n \}[/itex] whose common eigenkets form a complete orthonormalized set of (generalized) eigenvectors [itex]|a_1,\ldots,a_n \rangle[/itex], where the [itex]a_j \in \mathbb{R}[/itex] run over the spectrum of the operators [itex]A_j[/itex]. By "generalized" I mean that the spectrum of some (or all) of the operators can contain continuous parts (or be entirely continuous).

    Then the wave function, expressed with respect to this basis, is defined as
    [tex]\psi(a_1,\ldots,a_n)=\langle a_1,\ldots,a_n|\psi \rangle.[/tex]
    The physical meaning of this wave function is given by Born's rule, i.e., [itex]|\psi(a_1,\ldots,a_n)|^2[/itex] is the probability (distribution) for measuring common values [itex](a_1,\ldots,a_n)[/itex] when measuring the observables represented by the self-adjoint operators [itex]A_1,\ldots,A_n[/itex].
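
    For a single observable with a discrete, nondegenerate spectrum this is easy to illustrate numerically: the components [itex]\langle a|\psi \rangle[/itex] in the eigenbasis play the role of the wave function, their squared moduli sum to one, and they reproduce the expectation value [itex]\langle \psi|A|\psi \rangle[/itex]. A small NumPy sketch (the operator and state below are random, purely illustrative choices):

```python
import numpy as np

# Born's rule for one observable A with a discrete, nondegenerate spectrum:
# |<a|psi>|^2 are the measurement probabilities; they sum to one and reproduce
# the expectation value <psi|A|psi>.  A and psi below are random illustrative choices.
rng = np.random.default_rng(2)
N = 5

A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = A + A.conj().T                                    # self-adjoint observable

psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = psi / np.linalg.norm(psi)

a_vals, a_kets = np.linalg.eigh(A)                    # columns of a_kets are the |a>
psi_a = a_kets.conj().T @ psi                         # "wave function" psi(a) = <a|psi>
probs = np.abs(psi_a) ** 2                            # Born's rule

assert np.isclose(probs.sum(), 1.0)                   # probabilities sum to one
assert np.isclose(probs @ a_vals, (psi.conj() @ A @ psi).real)   # <A> two ways
```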

    The action of an arbitrary operator [itex]B[/itex] on the wave function is given by
    [tex]\hat{B} \psi(a_1,\ldots,a_n)=\langle a_1,\ldots,a_n|B \psi \rangle.[/tex]
    Note that [itex]\hat{B}[/itex] is something different from the abstract operator [itex]B[/itex]. The latter is defined only indirectly by some algebra of observables, while [itex]\hat{B}[/itex] is its representation in the concrete realization of the Hilbert space as a space of wave functions.

    As an example take the usual case of a single spinless particle, whose observable algebra is spanned by the position and momentum operators [itex]x_j[/itex] and [itex]p_j[/itex] (with [itex]j \in \{1,2,3\}[/itex] labeling the components of the corresponding vectors with respect to a Cartesian basis), obeying the Heisenberg-algebra commutation relations
    [tex][x_j,x_k]=[p_j,p_k]=0, \quad [x_j,p_k]=\mathrm{i} \hbar \delta_{jk}.[/tex]
    As a complete common set of observables we can choose the three position-vector components of the particle, each of which has the entire real axis as its spectrum. The corresponding generalized eigenvectors are by definition normalized as
    [tex]\langle \vec{x}|\vec{y} \rangle=\delta^{(3)}(\vec{x}-\vec{y}).[/tex]
    Then the wave function is
    [tex]\psi(\vec{x})=\langle \vec{x}|\psi \rangle.[/tex]
    Since [itex]|\vec{x} \rangle[/itex] is a common eigenvector of the three position-vector components, we have
    [tex]\hat{x}_j \psi(\vec{x}):=\langle \vec{x}|\mathbf{x}_j \psi \rangle=\langle \mathbf{x}_j \vec{x}|\psi \rangle=x_j \langle \vec{x}|\psi \rangle=x_j \psi(\vec{x}).[/tex]
    Here, I've written the abstract position vector in bold letters to distinguish it from the (generalized) eigenvalues [itex]x_j[/itex].

    For the momentum operator one has to use the commutation relations, which say nothing other than that the momentum operators are the generators of spatial translations, to show that
    [tex]\hat{p}_j \psi(\vec{x})=-\mathrm{i} \hbar \frac{\partial}{\partial x_j} \psi(\vec{x}).[/tex]
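
    As a quick numerical consistency check of this representation (in one dimension, with [itex]\hbar=1[/itex]; the grid and wave-packet parameters below are illustrative choices), one can apply a central finite difference to a Gaussian packet and compare with the analytic derivative:

```python
import numpy as np

# Consistency check of p -> -i*hbar*d/dx in one dimension: apply a central
# finite difference to a Gaussian wave packet and compare with the analytic
# derivative.  hbar = 1 and the grid/packet parameters are illustrative choices.
hbar = 1.0
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

k0, sigma = 1.5, 1.0
psi = np.exp(1j * k0 * x - x**2 / (2 * sigma**2))     # Gaussian wave packet

p_psi_numeric = -1j * hbar * np.gradient(psi, dx)     # -i hbar dpsi/dx on the grid
p_psi_exact = -1j * hbar * (1j * k0 - x / sigma**2) * psi   # analytic derivative

assert np.allclose(p_psi_numeric[1:-1], p_psi_exact[1:-1], atol=1e-3)
```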
     
  7. May 11, 2014 #6

    Jano L.

    Gold Member

    Yes, there are very different approaches and each has something to offer. They are so different that it can be confusing to try to use them both at the same time. One can stick with wave functions ##\psi## most of the time in non-relativistic wave mechanics, when dealing with atoms and molecules. This is how Landau and John Slater do it in their books. Wave functions are closer to the standard mathematics one knows from differential equations, and they are more concrete objects: their modulus squared directly gives the probability density (according to Born's rule for ##\psi(\mathbf r)##), and they can be used to calculate matrix elements of the electric moment or other quantities. Typical calculations of the spectrum for a given Hamiltonian are easier with wave functions than with abstract operators or matrices.
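
    As a concrete instance of such a calculation, one can represent ##\psi(x)## on a grid, build the Hamiltonian with a finite-difference Laplacian, and diagonalize; for the 1D harmonic oscillator (with ##\hbar=m=\omega=1## and a grid chosen only for illustration) the lowest eigenvalues come out close to ##n+1/2##:

```python
import numpy as np

# "Wave mechanics" spectrum calculation: represent psi(x) on a grid, build the
# 1D harmonic-oscillator Hamiltonian with a finite-difference Laplacian, and
# diagonalize.  Units hbar = m = omega = 1 and the grid are illustrative choices;
# the lowest eigenvalues should come out close to n + 1/2.
N = 1000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

laplacian = (np.diag(np.full(N, -2.0))
             + np.diag(np.ones(N - 1), 1)
             + np.diag(np.ones(N - 1), -1)) / dx**2

H = -0.5 * laplacian + np.diag(0.5 * x**2)            # H = p^2/2 + x^2/2

print(np.linalg.eigvalsh(H)[:4])                      # ~ [0.5, 1.5, 2.5, 3.5]
```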

    How can operator act on the wave function ##\psi##? Simply - it produces another function. For example, take the operator for momentum along ##x## axis. The result of the operation is another function
    $$
    (\hat{p}_x \psi)(x) = \frac{\hbar}{i} \frac{\partial \psi(x)}{\partial x}.
    $$
    Kets look nice and cool, but they do not give any practical advantage for solving problems in non-relativistic theory, as far as I know. Things are different in the quantum theory of radiation, which is based on the ket formalism. But my advice is to understand what is going on with wave functions first, and only then delve into the intricacies of the field and ket formalisms - there are quite a few (generalized eigenkets, the so-called rigged Hilbert space, etc.).
     
  8. May 11, 2014 #7
    Thank you for your answers.

    I think I'll follow bhobba's suggestion and get as much as I can from Ballentine and Sakurai, and then go back to the simplified Griffiths version, as that is mostly the approach my university courses follow.
     
  9. May 11, 2014 #8

    vanhees71

    Science Advisor
    2016 Award

    @Jano L. You are right to some extent: when it comes to practical calculations in non-relativistic QT the wave-function approach is the most useful, but it's not the simplest conceptual basis for learning QT. Also, to set up a problem, which you might solve in the end using the well-established methods of partial differential equations for the Schrödinger or Pauli equations, it's often much easier to use the abstract formalism.

    I find that the best book for advanced beginners is Sakurai, Modern Quantum Mechanics. It doesn't start with wave functions but from the real foundations and with the simplest systems (like spin 1/2, which only needs a two-dimensional complex Hilbert space).

    For the understanding of the interpretational problems and for some mathematical subtleties (such as the fact that physicists nowadays do not use the original Hilbert-space approach but the so-called rigged Hilbert space) I'd recommend Ballentine. Also very good as an advanced text is Weinberg, Lectures on Quantum Mechanics.

    For a lot of wave-mechanical applications you can use the classical texts like Landau and Lifshitz (vol. 3 of the theoretical physics course) or Messiah. Pauli's classic text, and also the quantum-mechanics volume of his lectures on theoretical physics, is marvelous.

    I don't know Griffiths's book well enough to say anything about it. Given his excellent electrodynamics book, I guess it's pretty good too.
     
  10. May 11, 2014 #9

    Jano L.

    Gold Member

    Could you give an example?
     
  11. May 11, 2014 #10

    vanhees71

    Science Advisor
    2016 Award

    E.g., deriving the general equations for time-independent or time-dependent perturbation theory is much clearer in the abstract formalism than with the wave-mechanics approach, but that's of course subjective. I forgot to mention that sometimes the third approach, Feynman's path-integral formulation, also has its merits. For time-dependent perturbation theory I'd even prefer it to the operator method.
     
  12. May 20, 2014 #11
    Is it correct then to operate as follows?
    [itex]\int dx\, [\hat{P}\psi(x)]^*[\hat{P}\psi(x)] = \int dx\, \hat{P} \psi(x)^* \hat{P} \psi(x) = \int dx\, \psi(x)^*\hat{P}^2\psi(x) [/itex]
    (assuming P is hermitian)
     
    Last edited: May 20, 2014
  13. May 20, 2014 #12

    Jano L.

    Gold Member

    The middle and the final expressions are incorrect. The correct result is
    $$
    \int dx\, \psi^* \hat{p}^2 \psi,
    $$
    since the operator is hermitian.
     
  14. May 20, 2014 #13
    Thanks, I forgot the ² in the final expression. Why is the middle one wrong, though?
     
  15. May 20, 2014 #14

    jtbell


    Staff: Mentor

    Are you trying to find the expectation value of ##\hat P## or of ##\hat P^2##?

    Assuming you're trying to find the expectation value of something, the first and second expressions are both invalid. The third one gives the expectation value of ##\hat P^2##. The expectation value of ##\hat P## would be ##\int dx\:ψ(x)^*\hat{P}ψ(x)##.
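
    Incidentally, the equality of the first and third expressions can be checked numerically for the momentum operator and a well-behaved normalizable ##\psi## (what makes them agree is hermiticity together with vanishing boundary terms). A minimal sketch with ##\hbar=1## and an arbitrary Gaussian packet, as a sanity check rather than a proof:

```python
import numpy as np

# Numerical check, for a normalisable psi and p = -i*hbar*d/dx (hbar = 1 here),
# that the integral of (p psi)*(p psi) equals the integral of psi*(p^2 psi);
# hermiticity plus vanishing boundary terms is what makes the two agree.
# The Gaussian packet and the grid are illustrative choices.
hbar = 1.0
x = np.linspace(-15.0, 15.0, 4001)
dx = x[1] - x[0]

psi = np.exp(1j * 0.7 * x - x**2 / 2)
psi = psi / np.sqrt((np.abs(psi) ** 2).sum() * dx)    # normalise

p_psi = -1j * hbar * np.gradient(psi, dx)             # p acting once
p2_psi = -1j * hbar * np.gradient(p_psi, dx)          # p acting twice

lhs = (p_psi.conj() * p_psi).sum() * dx               # first expression
rhs = (psi.conj() * p2_psi).sum() * dx                # third expression
assert np.isclose(lhs.real, rhs.real, rtol=1e-3)
```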
     
  16. May 20, 2014 #15
    No, I just have to prove that the first expression is equal to the third one.
     
  17. May 20, 2014 #16

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    It's not a bad idea to study the more general type of "state", but this won't help you understand the specific thing you asked about when you started the thread.
     
  18. May 20, 2014 #17

    jtbell


    Staff: Mentor

    Is ##\hat P## supposed to be the momentum operator (which is usually lower-case ##\hat p##), or is it the parity operator (which is usually upper-case ##\hat P##)? Or is it supposed to be a generic unspecified Hermitian operator?
     
  19. May 20, 2014 #18

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    If we ignore all the issues associated with the fact that position and momentum are unbounded observables, then the main difference between normalizable kets and wavefunctions is simply this:
    A normalizable ket ##|\alpha\rangle## is an element of a Hilbert space. A wavefunction ##\psi## is an element of the Hilbert space of square-integrable complex-valued functions on ##\mathbb R^3##.​
    So the only difference is that in the latter case, we say that the Hilbert space is specifically ##L^2(\mathbb R^3)##, and in the former case, we don't.

    If I remember correctly, Sakurai uses the formula ##\psi_\alpha(x)=\langle x|\alpha\rangle## to define wavefunctions from kets. I remember doing some calculations involving this formula when I took a class based on that book, and at the time I felt like they explained a lot. (I don't feel that way anymore, because now I understand how hard it is to justify these steps). For example, if we already know that ##e^{-ipl}## translates states a distance ##l## (i.e. ##e^{-ipl}|x\rangle=|x+l\rangle##), then we can do stuff like this:
    $$\psi_\alpha(x-l) =\langle x-l|\alpha\rangle=\langle x|e^{-ipl}|\alpha\rangle$$
    If we subtract ##\psi_\alpha(x)## from both sides, divide by ##-l##, and take the limit ##l\to 0##, we get
    $$\frac{d}{dx}\psi_\alpha(x)=\langle x|ip|\alpha\rangle =i\langle x|p|\alpha\rangle.$$
    In texts at the level of Sakurai, this is justified by using Taylor's formula to rewrite the left-hand side, and the power series definition of the exponential function to rewrite the right-hand side. The problem with that is that the power series definition only works when the exponent is a bounded operator. Anyway, if we still trust the result, we have "derived" that the p operator on the space of kets corresponds to ##-i\frac{d}{dx}## on the space of wavefunctions defined through the formula ##\psi_\alpha(x)=\langle x|\alpha\rangle##.
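
    A discretized version of the same manipulation can at least be checked on a computer: on a periodic grid ##p## acts in Fourier space as multiplication by ##k## (with ##\hbar=1##), so ##e^{-ipl}## just multiplies each Fourier component by ##e^{-ikl}##, and the wavefunction indeed comes back shifted by ##l##. A sketch (the grid, packet and shift are arbitrary illustrative choices):

```python
import numpy as np

# Discretised version of the translation identity, with hbar = 1: on a periodic
# grid p acts in Fourier space as multiplication by k, so exp(-i p l) multiplies
# each Fourier component by exp(-i k l), and the wavefunction comes back shifted,
# psi(x) -> psi(x - l).  Grid, packet and shift l are illustrative choices.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

psi = np.exp(-(x + 3.0) ** 2)                   # packet centred at x = -3
l = 2.0

psi_shifted = np.fft.ifft(np.exp(-1j * k * l) * np.fft.fft(psi))
psi_expected = np.exp(-(x - l + 3.0) ** 2)      # the same packet, centred at x = -3 + l

assert np.allclose(psi_shifted, psi_expected, atol=1e-8)
```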

    I'm not sure what to think of calculations like this, where we just pretend that the mathematical difficulties don't exist.

    So what if we don't ignore the fact that position and momentum are unbounded? For starters we can no longer say that the kets are elements of a Hilbert space. There's still a Hilbert space associated with the theory, but kets aren't elements of it. The proper way to define them involves choosing an appropriate subspace ##\Omega## of the Hilbert space, and then defining bras as continuous linear functionals on ##\Omega## and kets as continuous antilinear functionals on ##\Omega##. The notation ##\langle x|\alpha\rangle## shouldn't be interpreted as an inner product. It should be interpreted as ##\langle x|\big(|\alpha\rangle\big)##, i.e. the value of ##\langle x|## at ##|\alpha\rangle##, where ##|\alpha\rangle## is an element of ##\Omega##. Now I think the formula ##\psi_\alpha(x)=\langle x|\alpha\rangle## actually makes sense, and I think the steps of the calculation above can be justified too. It's pretty far from trivial though.
     
  20. May 20, 2014 #19

    bhobba

    Staff: Mentor

    Of course they are, as far as satisfying any kind of reasonable mathematical rigour is concerned, a load of rubbish.

    That's why we move to the Rigged Hilbert space formalism, but mathematically it's quite difficult. Some books do tackle these difficult issues, e.g.:
    https://www.amazon.com/Quantum-Theory-Mathematicians-Graduate-Mathematics/dp/146147115X

    And of course Ballentine gives a brief overview.

    But I have to say, and I fell into this trap so I know how seductive it is, it's not really germane to the physics. It's like analysis vs calculus. Calculus is just fine for doing physics; the deeper mathematical issues such as convergence, the least upper bound axiom, completeness of the reals etc, which are the domain of analysis, aren't really that important. If you want to call yourself a mathematician you need to do your epsilonics - but physics isn't quite that worried. And you do run into issues with such a naive approach, but in practice it's usually (not always, but usually) OK.

    Handwavey math works pretty well, especially to start with. You can delve into the technical details of mathematical rigour as mood and interest grabs you.

    Personally, when dealing with fundamental issues in QM, I consider everything a finite-dimensional vector space - those are the physically realizable states - which is then extended by the Rigged Hilbert space formalism to include linear functionals, purely for mathematical convenience, so we can take derivatives etc. Our calculations throw up things like waves that extend to infinity - obviously that is just an idealisation - but in certain situations it's a nice way to model things. Dirac delta functions don't exist either - but they are nice for modelling.
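
    As a toy version of that last point: the finite-cutoff "delta", sin(Kx)/(pi x), is a perfectly ordinary function, and only in the limit of large cutoff K does integrating it against a test function pick out the value at the origin. A small sketch (the test function and cutoffs are arbitrary illustrative choices):

```python
import numpy as np

# Toy illustration: the finite-cutoff "delta", D_K(x) = sin(Kx)/(pi x), is an
# ordinary function; only as K grows does integrating it against a test function
# pick out the value at x = 0.  Test function and cutoffs are illustrative choices.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
f = np.exp(-x**2)                                 # smooth test function, f(0) = 1

for K in (1.0, 5.0, 50.0):
    D_K = (K / np.pi) * np.sinc(K * x / np.pi)    # sin(Kx)/(pi x), regular at x = 0
    print(K, (D_K * f).sum() * dx)                # approaches f(0) = 1 as K grows
```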

    Just don't expect rigour unless you are willing to delve into some rather hairy math, as a quick browse through, say, Gelfand's tome on distribution theory and Rigged Hilbert spaces readily attests.

    Added Later:
    BTW Rigged Hilbert spaces are not the only way to rigorously develop this stuff - non-standard analysis also has a take, e.g.:
    http://projecteuclid.org/download/pdf_1/euclid.pja/1195523279

    But if you think Rigged Hilbert spaces are hairy, take my word for it: non-standard analysis is a whole new ball game - groan. I know - I was into it at one time and worked through the details. Without those details it's not too bad though, and we even have some elementary textbooks that take that approach:
    https://www.amazon.com/Elementary-Calculus-An-Infinitesimal-Approach/dp/0871509113

    But if you want rigour without the dreaded "it can be shown" - groan - you are in for a whole world of hurt.

    Thanks
    Bill
     
    Last edited by a moderator: May 6, 2017
  21. May 20, 2014 #20

    Matterwave

    Science Advisor
    Gold Member

    Do you really mean finite dimensional, or do you include countably infinite dimensional spaces? If you stay with finite dimensional vector spaces, I don't know how you could possibly express any sort of position representation. But in countably infinite spaces, you could work with e.g. the Hermite polynomials, which would give you a sort of (non-intuitive) position representation.

    I cannot imagine how you would accomplish such a task with a finite number of elements.
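
    To make the countably-infinite option concrete: in the oscillator (Hermite) basis the position operator is a simple tridiagonal matrix, and already its finite truncations give a discrete stand-in for the position spectrum that gets denser as the truncation grows. A sketch with ##\hbar=m=\omega=1## (the truncation sizes are arbitrary illustrative choices):

```python
import numpy as np

# In the countable harmonic-oscillator (Hermite) basis, with hbar = m = omega = 1,
# the position operator x = (a + a^dagger)/sqrt(2) is the infinite tridiagonal
# matrix with <n-1|x|n> = sqrt(n/2).  Truncating it to N states gives N discrete
# "position" eigenvalues whose range grows and whose central spacing shrinks as
# N increases -- a discrete stand-in for the continuous spectrum.
def truncated_x(N):
    off = np.sqrt(np.arange(1, N) / 2.0)          # <n-1| x |n> = sqrt(n/2)
    return np.diag(off, 1) + np.diag(off, -1)

for N in (10, 50, 200):
    evals = np.linalg.eigvalsh(truncated_x(N))
    spacing_mid = np.diff(evals)[N // 2]          # gap between neighbouring eigenvalues near 0
    print(N, evals.min(), evals.max(), spacing_mid)
```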
     