
Confused about wavefunctions and kets

  1. Jul 30, 2014 #1

    dyn

    Hi. I am confused about the following problems. Any help would be appreciated. Thanks

    1. I don't understand what ψ (r)= <r|ψ> means. What is the difference between the wavefunction ψ and the ket |ψ> ?

    2. A similar equation is ψ (p) = <p|ψ>. Is this ket |ψ> the same as the one above or is it expressed in terms of p instead of r ?

    3. Momentum p=[itex]\hbar[/itex]k ; so if k is quantised as in the infinite square well why is p not quantised ?
     
  3. Jul 30, 2014 #2

    The_Duck

    Meditate on the formula ##x_i = \hat i \cdot \vec x##. Here ##\vec x## is a 3-vector, ##\hat i## is a unit vector in one of the three spatial directions, and ##x_i## is the component of the vector in the direction of ##\hat i##.

    Similarly, ##\psi(r)## is one component of the Hilbert-space vector ##| \psi \rangle## (namely, the component in the direction of ##| r \rangle##).

    The ket ##| \psi \rangle## is unchanged, but now we are computing its components in a different basis. It's as if I took my 3-vector ##\vec x## and decided to compute its components in a different basis.

    It is! In the infinite square well, only certain discrete values of ##p^2## are possible (I write ##p^2##, because in the infinite square well there are no states of definite ##p##).

    But this is specific to that physical situation; in free space, for instance, neither ##k## nor ##p## is quantized.
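    To make the basis-component picture concrete, here is a minimal numerical sketch (Python/NumPy; the vector and the rotation angle are made-up values, not anything from the problem above). The same vector is expanded in two different orthonormal bases: the components change, but the vector and its length do not - which is all that happens when we pass between ##\psi(r)=\langle r|\psi\rangle## and ##\psi(p)=\langle p|\psi\rangle##.

    [code]
    import numpy as np

    v = np.array([1.0, 2.0, 3.0])        # the "ket": an abstract vector

    # Components in the standard basis: v_i = e_i . v
    e = np.eye(3)                        # rows are the basis vectors e_1, e_2, e_3
    components_e = e @ v                 # -> [1., 2., 3.]

    # Components of the SAME vector in a rotated orthonormal basis
    theta = np.pi / 6                    # rotate the basis by 30 degrees about z
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    f = R.T                              # rows are the rotated basis vectors R @ e_i
    components_f = f @ v                 # different numbers, same vector

    # Reconstructing v from either set of components gives the same vector back,
    # and the length (norm) is the same in both bases.
    print(np.allclose(v, components_f @ f))                        # True
    print(np.isclose(np.linalg.norm(components_e),
                     np.linalg.norm(components_f)))                # True
    [/code]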
     
  4. Jul 30, 2014 #3

    bhobba

    Staff: Mentor

    The ket is a vector. The wavefunction is its representation in the position basis.

    The ket is exactly the same - only its representation is changed.

    I suspect momentum is quantised as well, but regardless, it's just the way the math works out.

    Usually position is quantised because, when you look at the path integral formalism, some paths interfere and cancel some positions out - that's heuristic, of course - the detail is in the math.

    Thanks
    Bill
     
  5. Jul 30, 2014 #4

    dyn

    Thanks for your replies.
    So, for a free particle, position and momentum are continuous. For a bound particle, e.g. the infinite well, momentum is quantised but surely position is still continuous?
     
  6. Jul 30, 2014 #5

    dyn

    Going back to the simplest case of an infinite well from x=0 to x=L, the wavefunction is given by
    ψ = A sin(nπx/L). In this case, what is the difference between the wavefunction and the ket vector?
    On a side note, the wavefunction exists in a superposition state until observed, but when observed it collapses to a particular value of n. Let's say n=2, which means a probability of zero of finding the particle in the middle of the well. Does this mean the particle is then confined to one half of the well, as it cannot pass through the midpoint?
     
  7. Jul 30, 2014 #6

    Nugatory

    Staff: Mentor

    No. The particle has no position until it is measured, so you can't be thinking of it as moving around. "Moving" means changing position, so if it doesn't have a position it doesn't make sense to talk about it moving.

    The wave function, written in the position basis, gives you the probability of finding it at a particular location in the well if you measure its position. The particle can appear anywhere that the amplitude of the wave function is non-zero. If the wave function is zero in the middle of the well and non-zero towards the ends, that just tells you that you might find it at either end but not in the middle.
     
  8. Jul 30, 2014 #7

    dyn

    But once the position has been measured and the wavefunction collapses to that state, does the particle not have to remain in that half of the well?

    Also, what is the difference between the wavefunction and the ket for an infinite well?
     
  9. Jul 30, 2014 #8

    Nugatory

    Staff: Mentor

    It does not. The "collapsed" wave function has a sharp peak at one location, but it continues to evolve according to Schrodinger's equation and the peak spreads out and flattens over time.
     
  10. Jul 30, 2014 #9

    dyn

    Sorry to appear stupid here. Maybe you could tell me where I'm going wrong. Before the measurement is made, the wavefunction is in a superposition of states sin(nπx/L) with n running from 1 to ∞? When a measurement is made, the wavefunction collapses to a single value of n? Let's say it is n=2, which has zero probability in the middle. The wavefunction is not a function of time, so if the particle is measured in the RHS or LHS and it cannot be at the centre, should it not remain there?
    Sorry again for not getting this.
     
  11. Jul 31, 2014 #10

    Nugatory

    Staff: Mentor

    It's the other way around.

    The states identified by the number ##n## are energy eigenstates, states in which the energy is precisely fixed. Write one of these states as a function of position and you'll get ##\psi_n(x)=A\sin(n\pi x/L)##, and that will give you the amplitude for finding the particle at that position if it has that energy. That's our starting point if we know the particle is in the well with some definite energy. It's clearly not a state of definite position.

    When you measure the position of the particle and find it at position ##x_0##, the wave function collapses to ##\psi(x)=\delta(x-x_0)##. That state is a superposition of all the energy eigenstates, and it is not independent of time (the position operator does not commute with the Hamiltonian), so it can and does spread out.
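    To see the spreading concretely, here is a minimal sketch (Python/NumPy, with ##\hbar=m=1## and a well from 0 to ##L=1##; I use a narrow Gaussian as a stand-in for the delta function, since a true delta function is not normalizable). The "collapsed" state is expanded in the energy eigenstates ##u_n(x)=\sqrt{2/L}\sin(n\pi x/L)##, each coefficient picks up the phase ##e^{-iE_n t}##, and probability promptly leaks into the other half of the well.

    [code]
    import numpy as np

    L = 1.0
    x = np.linspace(0.0, L, 2001)
    dx = x[1] - x[0]
    N = 200                                     # number of energy eigenstates kept

    def u(n, x):
        # normalized well eigenfunction u_n(x) = sqrt(2/L) sin(n pi x / L)
        return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

    E = np.array([0.5 * (n * np.pi / L) ** 2 for n in range(1, N + 1)])  # E_n with hbar = m = 1

    # "Collapsed" state: a narrow Gaussian centred in the right-hand half
    x0, sigma = 0.7 * L, 0.01 * L
    psi0 = np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2)).astype(complex)
    psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

    # Coefficients c_n = <n|psi> = integral of u_n(x) psi(x) dx
    c = np.array([np.sum(u(n, x) * psi0) * dx for n in range(1, N + 1)])

    def psi_t(t):
        # psi(x, t) = sum_n c_n exp(-i E_n t) u_n(x)
        return sum(c[n - 1] * np.exp(-1j * E[n - 1] * t) * u(n, x)
                   for n in range(1, N + 1))

    left = x < L / 2
    for t in (0.0, 0.005, 0.05):
        prob_left = np.sum(np.abs(psi_t(t)[left]) ** 2) * dx
        # ~0 at t = 0, then grows toward ~1/2 as the packet spreads
        print("t = %.3f   P(x < L/2) = %.3f" % (t, prob_left))
    [/code]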
     
  12. Jul 31, 2014 #11

    dyn

    Thanks for that. My remaining questions are as follows, if anyone can help.

    What is the difference between the wavefunction ψ = A sin(nπx/L) for the infinite well and the associated ket?

    My notes say that with position and momentum kets, the corresponding operators have continuous spectra, so sums are replaced with integrals; but as seen for the infinite well, momentum is quantised, so it is not continuous. So is it only continuous when unbound? And when bound, is it treated as discrete, with sums rather than integrals?
     
  13. Jul 31, 2014 #12

    Nugatory

    Staff: Mentor

    Start with the answers from Bhobba and The_Duck above; if they don't work for you see if you can pick out exactly where you're getting lost and we'll be able to clear it up.

    In the most common and important problems, yes, that's pretty much how it works. Position and momentum are continuous in the unbound states so you have to integrate; in the bound states you can sum instead.
     
  14. Jul 31, 2014 #13

    bhobba

    Staff: Mentor

    It's a representation of the ket, which is an element of a (conceptually) infinite-dimensional vector space.

    Grab a bit of paper and draw a line with an arrow on it. That is a two-dimensional vector. Now project its end point onto the edges of the paper - that is its representation. To tell someone else what the vector is, you can send them the bit of paper with the line drawn on it - that would be telling them the actual vector. Or you can tell them its representation - i.e. the projections onto the sides of the paper - and they can reconstruct it from those two numbers.

    Extending that to an infinite-dimensional space, you obviously can't send them an infinite-dimensional bit of paper with the actual vector - you can only tell them its representation - and since it's infinite-dimensional you can't send them an infinite set of numbers unless it's some mathematically defined sequence, but at least you can get it as accurate as you like by sending a lot of numbers.

    Fortunately, in your example above, it's such that you only have to send your formula.

    You can represent a vector in a number of different bases. In your example it's continuous in the x basis, where x is used as the label (without detailing exactly what a continuous basis is), and discrete in the basis using n as your label.

    It's a subtle issue with Hilbert spaces that, I can see, can be a bit confusing unless you have studied them.

    But persevere.

    Thanks
    Bill
     
    Last edited: Jul 31, 2014
  15. Jul 31, 2014 #14

    dyn

    Thanks for persevering with me. I practice with past exam papers and I do fine with them, but when I delve deeper and come up with my own questions I get totally confused.
    As regards a 1-D infinite well, I have ψ(x) = A sin(nπx/L) = <x|ψ>, so how do I find |ψ>? It seems that in this case |ψ> is not important, but I am trying to understand things starting with the most basic.
     
  16. Jul 31, 2014 #15

    bhobba

    Staff: Mentor

    |ψ> is important - it's just that, because it's infinite-dimensional, you can't know what it is the way you can by drawing a line on a bit of paper; you can only know its representations.

    Have you studied linear algebra yet? Things will likely be a lot clearer after doing that.

    Thanks
    Bill
     
  17. Jul 31, 2014 #16

    dyn

    I have done some introductory linear algebra, e.g. matrices, determinants, eigenvalues and eigenvectors. I can cope with the exams at my level. It's just that I come up with my own questions and then confuse myself. Thanks for everyone's help.
     
  18. Aug 1, 2014 #17

    vanhees71

    Science Advisor
    Gold Member
    2017 Award

    Perhaps some general remarks still help. Often this issue is confused in textbooks by overemphasizing the wave-mechanics point of view, and then students think the kets [itex]|\psi \rangle[/itex] are the same as the wave function [itex]\psi(x)[/itex]. This is not true!

    The kets [itex]|\psi \rangle[/itex] are vectors in an abstract Hilbert space. For a single particle in non-relativistic quantum theory this is the separable Hilbert space. It's a well-known theorem from functional analysis that there is only one such Hilbert space, i.e., two realizations of this Hilbert space are always equivalent in the sense of a unitary mapping from one to the other realization.

    E.g., you can work in the position representation. To that end you introduce generalized eigenkets of the position operator [itex]|x \rangle[/itex], which are not Hilbert-space vectors but distributions. The mathematical details are worth studying, but are too lengthy to elaborate here. A good source is the book by Ballentine, who uses the modern language of rigged Hilbert spaces. If you are interested in even more mathematical rigor, the two-volume book by Galindo and Pascual is a good source. Anyway, you have a complete set of generalized position eigenvectors, and thus you can represent any vector in the abstract Hilbert space [itex]\mathcal{H}[/itex] by a square-integrable function [itex]\psi(x)=\langle x|\psi \rangle[/itex]. That shows that the abstract Hilbert-space formalism, invented by Dirac, is equivalent to Schrödinger's wave-mechanics formulation. You can go back and forth, using the completeness relation
    [tex]\int_{\mathbb{R}} \mathrm{d} x |x \rangle \langle x|=\hat{1}.[/tex]
    I.e., a given abstract ket [itex]|\psi \rangle \in \mathcal{H}[/itex] is mapped to the wave function
    [tex]\psi(x)=\langle x|\psi \rangle \in \mathrm{L}^2(\mathbb{R}),[/tex]
    where [itex]\mathrm{L}^2(\mathbb{R})[/itex] is the space of square-integrable functions [itex]\mathbb{R} \rightarrow \mathbb{C}[/itex]. On the other hand, if you are given an arbitrary square-integrable [itex]\psi(x)[/itex], you come back to the abstract vector by
    [tex]|\psi \rangle=\int_{\mathbb{R}} \mathrm{d} x |x \rangle \langle x|\psi \rangle=\int_{\mathbb{R}} \mathrm{d} x |x \rangle \psi(x).[/tex]
    Here, we have used the above stated completeness relation.

    Now, the Hilbert space is separable. This means there is a discrete basis of orthogonal vectors. An example is given by the energy eigenvectors of the harmonic oscillator, which I call [itex]|n \rangle[/itex], where [itex]n \in \mathbb{N}_0=\{0,1,2,3,\ldots \}[/itex]. The energy eigenvalues are [itex]E_n=\hbar \omega (n+1/2)[/itex]. This basis also obeys a completeness relation,
    [tex]\sum_{n=0}^{\infty} |n \rangle \langle n|=\hat{1}.[/tex]
    Thus you can map any state ket [itex]|\psi \rangle[/itex] to a square-summable sequence. This realizes the separable Hilbert space as the Hilbert space [itex]\ell^2[/itex] of such square-summable sequences. You just map [itex]|\psi \rangle[/itex] to the sequence defined by
    [tex]\psi_n=\langle n | \psi \rangle,[/tex]
    and if you are given a sequence [itex](\psi_n) \in \ell^2[/itex] you get back the abstract ket via the completeness relation
    [tex]|\psi \rangle=\sum_{n=0}^{\infty} |n \rangle \langle n|\psi \rangle=\sum_{n=0}^{\infty} |n \rangle \psi_n.[/tex]
    The formulation of quantum mechanics in the Hilbert space of square-summable sequences, [itex]\ell^2[/itex], is known as "matrix mechanics", worked out by Born, Jordan, and Heisenberg even before wave mechanics.

    Of course you can also directly map from the wave-mechanics to the matrix-mechanics realization using the appropriate completeness relations. Suppose you are given a sequence [itex](\psi_n) \in \ell^2[/itex]. Then you have
    [tex]\psi(x) = \langle x|\psi \rangle=\sum_{n=0}^{\infty} \langle x|n \rangle \langle n|\psi \rangle=\sum_{n=0}^{\infty} u_n(x) \psi_n.[/tex]
    Here, [itex]u_n(x)=\langle x|n \rangle[/itex] are the solutions of the time-independent Schrödinger equation (the eigenfunctions of the Hamiltonian) for the harmonic oscillator. Of course, you can also give the corresponding back transformation from the wave function to the sequence. This provides a unitary mapping between [itex]\ell^2[/itex] and [itex]\mathrm{L}^2(\mathbb{R})[/itex].
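    As a purely numerical illustration of this back-and-forth (my own sketch in Python/NumPy, not part of the formalism above; units [itex]\hbar=m=\omega=1[/itex], a made-up displaced Gaussian for [itex]\psi(x)[/itex], and the sequence truncated at 40 oscillator states):

    [code]
    import numpy as np
    from math import factorial, pi
    from numpy.polynomial.hermite import hermval

    x = np.linspace(-10.0, 10.0, 4001)
    dx = x[1] - x[0]

    def u(n, x):
        # oscillator eigenfunction u_n(x) = <x|n> = (2^n n! sqrt(pi))^(-1/2) H_n(x) exp(-x^2/2)
        Hn = hermval(x, [0.0] * n + [1.0])            # physicists' Hermite polynomial H_n(x)
        return Hn * np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 ** n * factorial(n) * np.sqrt(pi))

    # some square-integrable psi(x): a displaced ground-state Gaussian
    psi = np.exp(-(x - 1.5) ** 2 / 2.0) / pi ** 0.25

    N = 40                                             # truncation of the sequence (psi_n)
    psi_n = np.array([np.sum(u(n, x) * psi) * dx for n in range(N)])   # psi_n = <n|psi>

    # back transformation: psi(x) = sum_n u_n(x) psi_n
    psi_back = sum(psi_n[n] * u(n, x) for n in range(N))

    print(np.sum(np.abs(psi_n) ** 2))                  # ~1: the norm survives the map to l^2
    print(np.max(np.abs(psi - psi_back)))              # ~0: psi(x) is recovered from (psi_n)
    [/code]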

    I hope the issue is now a bit clearer.
     
  19. Aug 1, 2014 #18

    dyn

    It seems that the wavefunction ψ(r) is the ultimate goal, because if we take its modulus squared it gives the probability of finding a particle at different positions. Is the ket vector just a tool for calculations, or does it have a more fundamental meaning?
     
  20. Aug 1, 2014 #19

    Avodyne

    Science Advisor

    The ket has a more fundamental meaning. From it, we get not only ##\psi(x)=\langle x|\psi\rangle##, whose absolute square gives us the probability density to find the particle near ##x##, but also ##\tilde\psi(p)=\langle p |\psi\rangle##, whose absolute square gives us the probability density to find the particle with momentum near ##p##, and also ##\psi_n=\langle n|\psi\rangle## (where ##|n\rangle## is the ##n##th energy eigenstate), whose absolute square gives us the probability to find the particle with energy ##E_n##. (Here I have assumed discrete energy eigenstates.) All of these different probabilities (and many more, for any observable at all) are implicitly contained in the ket vector ##|\psi\rangle##.

    However, we still have to specify ##|\psi\rangle## in some form. To do so, we need to know one of its representations, ##\psi(x)## or ##\tilde\psi(p)## or ##\psi_n## or some other, explicitly.
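    As a concrete (and purely illustrative) sketch of the first two of these representations, assuming ##\hbar=1## and a made-up Gaussian packet: ##\tilde\psi(p)=\langle p|\psi\rangle## is obtained from ##\psi(x)## by the Fourier transform ##\tilde\psi(p)=\frac{1}{\sqrt{2\pi\hbar}}\int \mathrm{d}x\, e^{-ipx/\hbar}\,\psi(x)##, and both representations carry the same total probability.

    [code]
    import numpy as np

    hbar = 1.0
    x = np.linspace(-20.0, 20.0, 4001)
    dx = x[1] - x[0]

    # a made-up Gaussian packet centred at x0 with mean momentum p0
    x0, p0, sigma = 2.0, 1.5, 1.0
    psi_x = np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) * np.exp(1j * p0 * x / hbar)
    psi_x /= np.sqrt(np.sum(np.abs(psi_x) ** 2) * dx)          # normalize so <psi|psi> = 1

    # psi~(p) = <p|psi> = (2 pi hbar)^(-1/2) integral dx exp(-i p x / hbar) psi(x)
    p = np.linspace(-8.0, 8.0, 1601)
    dp = p[1] - p[0]
    psi_p = np.array([np.sum(np.exp(-1j * pk * x / hbar) * psi_x) * dx for pk in p])
    psi_p /= np.sqrt(2 * np.pi * hbar)

    print(np.sum(np.abs(psi_x) ** 2) * dx)      # 1: total probability in the position basis
    print(np.sum(np.abs(psi_p) ** 2) * dp)      # ~1: same total probability in the momentum basis
    print(np.sum(p * np.abs(psi_p) ** 2) * dp)  # ~1.5: mean momentum read off from |psi~(p)|^2
    [/code]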
     
  21. Aug 1, 2014 #20

    dyn

    Isn't one of the postulates of QM that the wavefunction ψ(r, t) contains all of the information we could possibly know about a particle? And ψ(p, t) can be obtained by a Fourier transform of ψ(r, t). So we can find everything we want to know without using kets?
     
  22. Aug 1, 2014 #21

    atyy

    Science Advisor

    Yes, the wave function contains everything, and the Fourier transform gives the same information in a different way. Because the information is the same whether we use ψ(x) or ψ(p), we say that they are different representations of the same object - the ket or vector |ψ>.

    If we have a vector, we can write it as v = a1e1 + a2e2, or as a column vector (a1 a2)T. But if we choose different basis vectors, then the same vector can be written as v = b1f1 + b2f2, or as a column vector (b1 b2)T.

    Just as v is a vector, the ket |ψ> is also a vector.
    Just as (a1 a2)T and (b1 b2)T are different coordinate representations of the vector v in different bases, the wave functions ψ(x) and ψ(p) are also the coordinates of the ket |ψ> in different bases.

    To extract the coordinates given an orthonormal basis eg. a1, one takes the scalar product a1 = (e1,v).
    That is exactly the same as writing ψ(x) = <x|ψ>. The only notational difference is that x is a continuous index, whereas the coordinates and basis vectors for the vector v had a discrete index which ran from 1 to 2.
    (There are subtleties, but we can leave that to the mathematicians for the time being.)
     
    Last edited: Aug 1, 2014
  23. Aug 1, 2014 #22

    Nugatory

    Staff: Mentor

    In principle, yes, but for many problems that's the hard way of doing it. Choosing to attack a problem using a representation in which ##\psi## is written as a function of position and time is like choosing to solve a problem in Cartesian coordinates - it may or may not be the easiest approach. Before committing yourself, you'd look at the problem and decide whether you want to write your vectors in terms of x, y, and z components, or r, ##\theta##, ##\phi## components, or something else altogether.

    For an example of a problem that is more easily solved using the abstract bra-ket notation than by using a function of x and t, look at the quantum mechanical harmonic oscillator.
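    For instance, here is a minimal sketch (Python/NumPy; the truncation at 10 number states is arbitrary) of the oscillator handled purely in the abstract number basis: build the ladder operator ##a##, form ##H=a^\dagger a+\tfrac{1}{2}## (with ##\hbar=\omega=1##), and read off the spectrum ##E_n=n+\tfrac{1}{2}## without ever writing down a wave function ##\psi(x,t)##.

    [code]
    import numpy as np

    N = 10                                         # truncate the number basis |0>, ..., |N-1>
    a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>
    H = a.conj().T @ a + 0.5 * np.eye(N)           # H = a^dagger a + 1/2  (hbar = omega = 1)

    print(np.linalg.eigvalsh(H))                   # [0.5, 1.5, 2.5, ...] = n + 1/2
    [/code]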
     
  24. Aug 1, 2014 #23

    bhobba

    Staff: Mentor

    No.

    If you want to see a proper axiomatic treatment get a hold of Ballentine - Quantum Mechanics - A Modern Development and read the first three chapters.

    The math is probably above your level right now, but you will likely get the gist and can return to it later when your math is a bit more advanced.

    He also carefully explains the issue you are discussing here, by which I mean it's trivial when you understand the basic principles. For example, a quantum state isn't really an element of a vector space; it's actually an operator.

    Thanks
    Bill
     
  25. Aug 1, 2014 #24

    dyn

    I will try and get hold of the Ballentine book. But for now I've got conflicting answers. The wave function contains everything! And it doesn't! I have looked in several books. Some say the wavefunction contains everything we need to know and some say it is the state vector that contains everything. But they are both derivable from each other. So do they both contain everything we need to know?
     
  26. Aug 1, 2014 #25

    atyy

    Science Advisor

    Yes, since the wave function is the representation of the state vector in the position basis, they are equivalent.

    The state vector is generally preferred, because there are systems, such as a single spin on a lattice, in which position is not a degree of freedom, so there is no position basis. However, usage is loose enough that if you say "the wave function" of a single spin on a lattice, people will generally know you mean the state vector of that system, or the state vector in some basis that is appropriate for the degrees of freedom of that system.
     
    Last edited: Aug 1, 2014