
Definition of a wavefunction as a function of position.

  1. May 25, 2013 #1
    Hello, I wonder if anyone here can explain to me why the wavefunction as a function of position is defined like this:

    $$\Psi(x,t) = \langle{x}|{\Psi(t)}\rangle$$

    Why is it wise to use an inner product as a definition?
  3. May 25, 2013 #2
    A quantum state is an abstract object: a vector in (in general, infinite dimensional) Hilbert space. A vector can be described a bit more concretely by its components with respect to a particular basis, e.g.:
    ##|\psi\rangle = \sum_i c_i |\phi_i\rangle##
    where ##\{|\phi_i\rangle\}## is a basis for the Hilbert space. If it's an orthonormal basis, it has the advantage that a vector's components can be recovered very easily by an inner product: ##c_i = \langle \phi_i|\psi\rangle##. Hence, the wavefunction is just the projection of the quantum state onto the position basis, ##\{|x\rangle\}## (or, more generally, ##\{|\vec{r}\rangle\}##). Note that the position basis isn't always an appropriate choice, such as when considering two-level quantum systems defined by, e.g., spin states.

    In quantum mechanics, we also have the Born rule which says, roughly, that if we have a state expressed in an orthonormal basis and a measurement that allows us to determine which basis state the system is in, then the probability that the system will be in state ##|\phi_i\rangle## is ##|c_i|^2##. Thus, the squared magnitude of a particle's wavefunction has the usual interpretation as the probability density that the particle will be found at a particular place if its location is measured.
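    The two steps above (recovering components by inner products, then squaring them for probabilities) can be sketched numerically in a finite-dimensional toy space; the state vector below is a hypothetical example, not any particular physical system:

    ```python
    import numpy as np

    # A toy 3-dimensional "Hilbert space" with the standard orthonormal basis.
    # The numbers in psi are illustrative only.
    psi = np.array([1 + 1j, 2.0, 1j])
    psi = psi / np.linalg.norm(psi)              # normalize the state

    basis = np.eye(3)                            # rows are orthonormal basis vectors

    # Components recovered by inner products: c_i = <phi_i|psi>
    # (np.vdot conjugates its first argument, matching the bra)
    c = np.array([np.vdot(basis[i], psi) for i in range(3)])

    # Born rule: probability of finding the system in |phi_i> is |c_i|^2
    probs = np.abs(c) ** 2
    print(probs.sum())                           # 1.0: probabilities sum to one
    ```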
  4. May 25, 2013 #3
    [itex]\left|ψ\right\rangle[/itex] is called the state vector for a system, and to understand its meaning, we need to project it onto a set of basis states.

    Just like [itex]\vec{A}[/itex] is a vector, and we give it meaning by projecting it onto a complete basis (using an inner or "dot" product) like real space in cartesian coordinates, like so:

    [itex]\vec{A} = \vec{A}\cdot\hat{r} \rightarrow \langle{\hat{r}}\left|A\right\rangle[/itex]

    where [itex]\hat{r} = \hat{x} + \hat{y} + \hat{z} \rightarrow \left|x\right\rangle + \left|y\right\rangle + \left|z\right\rangle[/itex]

    so now you'll calculate the inner product of [itex]\vec{A}[/itex] with each of the three basis vectors and obtain the "components" of A along each direction: [itex]\langle{x}\left|A\right\rangle[/itex], [itex]\langle{y}\left|A\right\rangle[/itex], and [itex]\langle{z}\left|A\right\rangle[/itex]. Now you can understand the meaning of [itex]\vec{A}[/itex] because it has been projected onto a basis.

    The state vector gives probability amplitudes. Probabilities of what? You need to project it onto a complete basis to give it meaning. A two-state system like the spin of a spin-1/2 particle is easy to understand. It can be "spin up" or "spin down". Let's take the z-axis as the reference for up or down, so our complete basis is formed by [itex]\left|+z\right\rangle[/itex] and [itex]\left|-z\right\rangle[/itex], meaning [itex]\langle{+z}\left|ψ\right\rangle[/itex] is the probability amplitude for the particle to be spin up and [itex]\langle{-z}\left|ψ\right\rangle[/itex] is for spin down. (Take the squared magnitude for actual probabilities.)

    Now if our state vector is one that wants to describe the position of a particle, by giving an amplitude for it to be at any position [itex]x[/itex], we need to project it onto a basis that gives it a probability amplitude to be observed at any [itex]x[/itex]. How many [itex]x[/itex] basis vectors are there? Infinitely many, since [itex]x[/itex] is continuous! We take this in stride, however, and calculate the probability amplitudes [itex]\langle{x}\left|ψ\right\rangle[/itex], which is just the inner product between two infinite-dimensional vectors.

    All that background just to say since the amplitude [itex]\langle{x}\left|ψ\right\rangle[/itex] has different values for different values of [itex]x[/itex], it's just a function of [itex]x[/itex]... the wavefunction, and we can just denote it as [itex]\langle{x}\left|ψ\right\rangle \equiv ψ(x)[/itex].
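    Numerically, the continuous basis is handled exactly as described: the infinitely many [itex]\left|x\right\rangle[/itex] become grid points and integrals become sums. A small sketch with an illustrative normalized Gaussian for [itex]ψ(x)[/itex]:

    ```python
    import numpy as np

    # Discretize psi(x) = <x|psi> on a grid; integrals become Riemann sums.
    # The Gaussian below is just an illustrative normalized wavefunction.
    x = np.linspace(-10, 10, 2001)
    dx = x[1] - x[0]
    psi = np.pi ** (-0.25) * np.exp(-x**2 / 2)   # psi(x), normalized analytically

    # <psi|psi> = integral of |psi(x)|^2 dx  ->  sum over the grid
    norm_sq = np.sum(np.abs(psi) ** 2) * dx
    print(norm_sq)                               # ~1.0: total probability
    ```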
  5. May 25, 2013 #4



    The correct definition would be

    [tex] \Psi \left(\vec{x},t\right) = \langle \vec{x},t|\Psi \rangle [/tex]

    and the full mathematical explanation is found in the theory of the representations of the Galilei group on a rigged Hilbert space.
  6. May 25, 2013 #5
    Attached is an excerpt from Townsend's excellent quantum mechanics book that explains how wavefunctions and the Schrödinger equation emerge rather naturally from the bra-ket machinery.


  7. May 26, 2013 #6
    Thank you guys. Your assistance has always been of a great help to me.
  8. May 26, 2013 #7
    So let's say I have a position basis. If I now use an operator ##\hat{p}## on this basis, will I get a basis which corresponds to momentum space, so that the projections of ##|\Psi(t)\rangle## onto these basis vectors will now be ##\Psi (p,t)##?

    In other words: do operators transform the basis, the state vector, or both?
  9. May 26, 2013 #8
    No, that's not what QM operators do. (In the following, I will be assuming for simplicity, as I did in my first post, that the wavefunction is only spatially one dimensional, like the square well.)

    There are two kinds of operators we use in QM: Hermitian and unitary. Hermitian operators are associated with measurements. That doesn't mean doing a measurement is the same as applying the operator. It just means that when we say ##\hat{P}## is the momentum operator, then its eigenvalues are the spectrum of possible momentum measurements. It's not a change of basis operator.
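    The claim about Hermitian operators can be sketched numerically; the matrix below is a hypothetical stand-in for an observable, not any particular physical operator:

    ```python
    import numpy as np

    # A Hermitian operator stands in for an observable; its (always real)
    # eigenvalues are the spectrum of possible measurement outcomes.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    H = (A + A.conj().T) / 2                   # Hermitian part: H equals its dagger

    w, V = np.linalg.eigh(H)                   # real eigenvalues, orthonormal eigenvectors
    print(w)                                   # real spectrum of possible outcomes
    print(np.allclose(H @ V, V @ np.diag(w)))  # True: eigenvalue equation holds
    ```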

    Unitary operators, on the other hand, are often physically realizable, and it does make sense to apply them to generic states like you're trying to do. Mathematically, they're generated by anti-Hermitian operators (which we can make by multiplying a Hermitian operator by ##i## and some arbitrary real constant) via an exponential map. The momentum operator generates translations in space: ##T(a) = e^{-ia\hat{P}/\hbar}, T(a)|x\rangle = |x+a\rangle## (a passive translation of the position basis to the right). In terms of wavefunctions, this means ##\langle x|T(a)|\Psi(t)\rangle = \Psi(x-a,t)## (an active translation of the wavefunction to the right). In the same manner, the Hamiltonian (the operator corresponding to the observable energy) generates time evolution of states and angular momentum operators generate rotation.
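    The translation operator ##T(a) = e^{-ia\hat{P}/\hbar}## can be sketched on a periodic grid (with ##\hbar = 1## and an illustrative Gaussian), since ##\hat{P}## is diagonal in momentum space and the exponential becomes a simple phase under an FFT:

    ```python
    import numpy as np

    # Periodic grid and an illustrative Gaussian bump at the origin
    N = 256
    L = 20.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = L / N
    psi = np.exp(-x**2)

    a = 5 * dx                                  # translate by exactly 5 grid steps
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # momentum grid (hbar = 1)

    # T(a) = exp(-i a P / hbar) is multiplication by exp(-i k a) in momentum space
    psi_shifted = np.fft.ifft(np.exp(-1j * k * a) * np.fft.fft(psi))

    # For a shift by an exact number of grid points this matches rolling the array,
    # i.e. psi_shifted(x) = psi(x - a): the bump moves to the right.
    print(np.allclose(psi_shifted, np.roll(psi, 5)))   # True
    ```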
    Last edited: May 26, 2013
  10. May 26, 2013 #9
    Attached is an extension of the previous Townsend excerpt, which explains momentum space wavefunctions and the relationship between the position basis and the momentum basis.


  11. May 26, 2013 #10
    Ok, so if I understood right, the inner product is a projection of a state vector's norm ##|\,|\psi(t) \rangle\,|## onto a normalised vector ##|x\rangle##. So this means that whatever I get is also a norm!

    This means that:

    $$\langle x | \psi(t)\rangle \neq \psi(x,t)$$

    but rather:

    $$\langle x | \psi(t)\rangle = |\psi(x,t)|$$

    Is this correct? I mean why wouldn't it be a norm?
    Last edited: May 26, 2013
  12. May 26, 2013 #11
    Huh? An inner product does not give you a norm. You can define a norm via an inner product as ##\| |\psi\rangle \| = \sqrt{\langle \psi | \psi \rangle}##. The inner product is not a "projection of a state vector's norm", it's a projection of one vector onto another.

    No, it isn't. I don't really understand why you think it is.
  13. May 26, 2013 #12
    What gave you that idea? ##\langle x|\psi\rangle## is the inner product of ##|\psi\rangle## and ##|x\rangle##.

    I'm not sure what you mean by this.
    No, $$\langle x | \psi(t)\rangle = \psi(x,t)$$ is the correct expression.
    Again, I'm not sure what you mean by this.
  14. May 27, 2013 #13
    Well, the inner product gives us a number, doesn't it (it may be complex or real, right)? After all, it is only a generalization of the scalar product, and they are very similar:

    $$\begin{aligned}
    \text{scalar product:}& & \vec{a}\cdot \vec{b} &= a_1b_1+a_2b_2+a_3b_3+\dots \xrightarrow{\text{geom. interp.}} \overbrace{|\vec{a}|}^{\rlap{\text{What would happen if this is normalized?}}}\,\underbrace{|\vec{b}|\,\cos{\varphi}}_{\rlap{\text{This is a projection of a norm.}}}\\
    \text{inner product:}& & \left\langle a \right| \left. b\right\rangle &= \overline{a_1}b_1 + \overline{a_2}b_2 + \overline{a_3}b_3+\dots
    \end{aligned}$$

    The geometrical interpretation of the scalar product is known to me (Wiki), while I don't know the geometrical interpretation of an inner product. But if I look at the scalar product, I can say that in fact we project the norm of the vector ##\vec{b}## and not the vector ##\vec{b}## itself! We then multiply this norm by the norm of ##\vec{a}##, denoted ##|\vec{a}|##.

    So what would happen if ##\vec{a}## was normalised? This means ##|\vec{a}| = 1##, and therefore the scalar product equals only the projection of a norm: ##\vec{a}\cdot\vec{b} = |\vec{b}|\cos\varphi##. So I thought that this also applies to the inner product ##\left \langle x\right| \left.\psi(t) \right\rangle##. After all, an inner product is a number and not a vector!
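    The geometric identity for the real scalar product can indeed be checked numerically with hypothetical 2D vectors (the subtlety the replies address is what happens once the numbers become complex):

    ```python
    import numpy as np

    # Illustrative real 2D vectors
    a = np.array([3.0, 4.0])
    b = np.array([2.0, 1.0])

    a_hat = a / np.linalg.norm(a)          # normalize a, so |a| = 1

    # With a normalized, the dot product equals |b| cos(phi) exactly:
    lhs = np.dot(a_hat, b)
    phi = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    rhs = np.linalg.norm(b) * np.cos(phi)
    print(np.isclose(lhs, rhs))            # True
    ```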
    Last edited: May 27, 2013
  15. May 27, 2013 #14
    No one ever said it gives you a vector. I still don't understand your confusion. The inner product gives you a complex number, as you said. Norms are always greater than or equal to zero, so clearly the inner product is not, by itself, a norm if its value can be complex.
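    The distinction is easy to see numerically with hypothetical vectors: the inner product can come out complex, while the norm built from it is always real and non-negative:

    ```python
    import numpy as np

    # Illustrative complex vectors
    phi = np.array([1.0 + 0j, 0.0])
    psi = np.array([1j, 2.0 + 0j])

    # The inner product can be any complex number...
    ip = np.vdot(phi, psi)                 # <phi|psi> = sum of conj(phi_i) * psi_i
    print(ip)                              # 1j: complex, so not a norm by itself

    # ...but the induced norm sqrt(<psi|psi>) is always real and non-negative
    norm = np.sqrt(np.vdot(psi, psi).real)
    print(norm)                            # sqrt(|1j|^2 + |2|^2) = sqrt(5)
    ```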
  16. May 27, 2013 #15
    This changes everything indeed. Thank you.
  17. May 27, 2013 #16



    Note that all inner products do induce a norm. If [itex]\langle \cdot \mid \cdot \rangle[/itex] is an inner product, then
    [tex]|| \cdot ||: a \mapsto \sqrt{\langle a \mid a \rangle}[/tex]
    defines a norm.

    I guess part of your confusion arises from the fact that the first inner product that you have learned, the scalar product [itex]\vec a \cdot \vec b[/itex] on [itex]\mathbb{R}^n[/itex], actually induces the Euclidean norm [itex]||a||_\mathrm{Eucl} = \sqrt{a_1^2 + a_2^2 + \cdots + a_n^2}[/itex]. Because you already have so much intuition for these things, it is sometimes hard to distinguish between e.g. the inner product and its induced norm.

    However there are many other inner products, the most common ones being
    (a) [itex]\langle \vec a \mid \vec b \rangle = a_1 b_1^* + \cdots + a_n b_n^*[/itex] (with * denoting complex conjugation) on [itex]\mathbb{C}^n[/itex], which reduces to the previously mentioned product if a and b are real.
    (b) [itex]\langle \vec f \mid \vec g \rangle = \int_a^b f(x) g(x)^* \, dx[/itex] on [itex]C(a, b)[/itex], the space of all continuous functions on the interval [a, b].

    Also note that the converse is not necessarily true, e.g.
    [tex]||a||_0 = \max\{ |a_1|, |a_2|, \ldots, |a_n| \}[/tex]
    is a norm on [itex]\mathbb{R}^n[/itex] as well, but there is no inner product [itex]\langle \cdot \mid \cdot \rangle_0[/itex] such that [itex]||a||_0 = \sqrt{\langle a \mid a \rangle_0 }[/itex].
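    The last claim can be checked numerically via the parallelogram law, which every norm induced by an inner product must satisfy; the max norm fails it (a sketch with illustrative vectors):

    ```python
    import numpy as np

    def euclid_norm(v):
        return np.sqrt(np.dot(v, v))       # induced by the standard inner product

    def max_norm(v):
        return np.max(np.abs(v))           # the ||.||_0 norm from the post

    # Inner-product-induced norms satisfy the parallelogram law:
    #   ||a+b||^2 + ||a-b||^2 == 2||a||^2 + 2||b||^2
    def parallelogram_gap(norm, a, b):
        return norm(a + b)**2 + norm(a - b)**2 - 2 * norm(a)**2 - 2 * norm(b)**2

    a = np.array([1.0, 0.0])
    b = np.array([0.0, 1.0])

    print(parallelogram_gap(euclid_norm, a, b))   # ~0: consistent with an inner product
    print(parallelogram_gap(max_norm, a, b))      # -2.0: no inner product can induce it
    ```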
  18. May 28, 2013 #17



    Can you give a reference please?

    Sorry, but I have to comment on your signature too:
    Learn how to use mathematics in physics from books written by those who use mathematics in physics! :smile:
  19. May 28, 2013 #18



    A good reference for the rigged-Hilbert-space approach and the Galilei group is

    L. Ballentine, Quantum Mechanics, Addison-Wesley

    For a mathematically more rigorous treatment see the two-volume book by Galindo/Pascual.

    One should also note that in the bra-ket formalism it is important to say which picture of time evolution one has in mind. In dextercioby's posting it was obviously the Heisenberg picture, where the operators representing observables carry the full time dependence and the state (statistical operator or representing state ket) is time independent. Then the full time dependence is in the (generalized) eigenvectors of the representation. In the position representation you then have what dextercioby wrote:
    [tex]\psi(t,\vec{x})=_{\mathrm{H}}\langle \vec{x},t|\psi \rangle_{\mathrm{H}}.[/tex]
    The wave function is invariant under the choice of the picture of time evolution. In non-relativistic quantum theory one more often sees the Schrödinger picture. Then the state kets carry the complete time dependence and the observable operators and their (generalized) eigenstates are time independent. Then you get
    [tex]\psi(t,\vec{x})=_{\mathrm{S}} \langle \vec{x}|\psi,t \rangle_{\mathrm{S}}.[/tex]
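    The picture-independence of the wave function can be sketched for a hypothetical two-level system (##\hbar = 1##): in the Schrödinger picture the state evolves and the basis bra is fixed, in the Heisenberg picture the basis bra evolves instead, and the resulting amplitude is identical:

    ```python
    import numpy as np

    # Illustrative Hermitian Hamiltonian and evolution time (hbar = 1)
    H = np.array([[1.0, 0.5], [0.5, -1.0]])
    t = 0.7
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U(t) = exp(-iHt)

    psi0 = np.array([1.0, 1j]) / np.sqrt(2)   # initial state (illustrative)
    e0 = np.array([1.0, 0.0])                 # a fixed basis vector |e0>

    # Schroedinger picture: <e0|psi(t)>_S, the state evolves
    schr = np.vdot(e0, U @ psi0)
    # Heisenberg picture: <e0,t|psi>_H with <e0,t| = <e0|U(t), the bra evolves
    heis = np.vdot(U.conj().T @ e0, psi0)
    print(np.isclose(schr, heis))             # True: same wave function either way
    ```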
  20. May 28, 2013 #19
    But how do you have a bra like ##\langle \vec{x},t|## to project onto in the Heisenberg picture? That is itself a time-dependent state, but all states in the Heisenberg picture are time independent. What am I missing?
  21. May 28, 2013 #20



    Hendrik, I didn't have any picture in mind when I wrote that equality. I put the ##t## in the bra to make clear that t comes from the Galilei group, or rather from a 1-parameter subgroup of it, which is the time evolution.
  22. May 28, 2013 #21



    @dextercioby: You can use the Galilei group in any picture of time evolution. It has nothing to do with the choice of the picture, since quantum theory is invariant under (time-dependent) unitary transformations, and you always can go from one picture to another by such a unitary transformation.

    @LastOneStanding: A wave function is always the (generalized) scalar product of a (generalized) eigenstate of the operators representing observables, and the state ket. In the Heisenberg picture the operators are time dependent, and thus the (generalized) eigenvectors are also time dependent, while the state kets are time independent.
  23. May 28, 2013 #22
    So not all kets are time independent in the Heisenberg picture? Only those that refer to physical states of systems?
  24. May 28, 2013 #23



    Yes. Conversely, not all kets are time dependent in the Schrödinger picture. If the observables are time-independent, so are their eigenstates. Sakurai distinguishes between state kets and base kets here.
  25. May 30, 2013 #24
    "Definition of a wavefunction as a function of position".

    I remember the comment of a physicist (not famous, but I really should have remembered his/her name): "Coordinate and position are very different things, but often they are not distinguished very well."