Confused about wavefunctions and kets

In summary, the conversation discusses the concepts of wavefunction and ket vector in quantum mechanics, and their relationship to position and momentum. It is explained that the wavefunction is a representation of the ket vector in a particular basis, and that for a free particle both position and momentum are continuous, while for a particle bound in an infinite well the allowed values of p² are quantized although position remains continuous. The concept of the wavefunction collapsing upon measurement is also discussed, with the understanding that the particle's position is not defined until it is measured.
  • #1
dyn
Hi. I am confused about the following problems. Any help would be appreciated. Thanks

1. I don't understand what ψ (r)= <r|ψ> means. What is the difference between the wavefunction ψ and the ket |ψ> ?

2. A similar equation is ψ (p) = <p|ψ>. Is this ket |ψ> the same as the one above or is it expressed in terms of p instead of r ?

3. Momentum p=[itex]\hbar[/itex]k ; so if k is quantised as in the infinite square well why is p not quantised ?
 
  • #2
dyn said:
1. I don't understand what ψ (r)= <r|ψ> means. What is the difference between the wavefunction ψ and the ket |ψ> ?

Meditate on the formula ##x_i = \hat i \cdot \vec x##. Here ##\vec x## is a 3-vector, ##\hat i## is a unit vector in one of the three spatial directions, and ##x_i## is the component of the vector in the direction of ##\hat i##.

Similarly, ##\psi(r)## is one component of the Hilbert-space vector ##| \psi \rangle## (namely, the component in the direction of ##| r \rangle##).

dyn said:
2. A similar equation is ψ (p) = <p|ψ>. Is this ket |ψ> the same as the one above or is it expressed in terms of p instead of r ?

The ket ##| \psi \rangle## is unchanged, but now we are computing its components in a different basis. It's as if I took my 3-vector ##\vec x## and decided to compute its components in a different basis.
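The basis-change idea can be checked numerically. Below is a minimal sketch (my own illustration, not from the thread) using NumPy: the same 2-vector has different components in the standard basis and in a rotated basis, but its length is unchanged, just as ##|\psi\rangle## itself is unchanged between the position and momentum representations.

```python
import numpy as np

# A fixed 2-vector, analogous to the ket |psi>.
v = np.array([3.0, 4.0])

# Standard basis components (like psi(r) = <r|psi>).
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
comp_std = np.array([e1 @ v, e2 @ v])

# A rotated orthonormal basis (like the momentum basis).
theta = np.pi / 6
f1 = np.array([np.cos(theta), np.sin(theta)])
f2 = np.array([-np.sin(theta), np.cos(theta)])
comp_rot = np.array([f1 @ v, f2 @ v])

# The components differ, but the vector (its length) is unchanged.
assert not np.allclose(comp_std, comp_rot)
assert np.isclose(np.linalg.norm(comp_std), np.linalg.norm(comp_rot))
```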

dyn said:
3. Momentum p=[itex]\hbar[/itex]k ; so if k is quantised
as in the infinite square well why is p not quantised ?

It is! In the infinite square well, only certain discrete values of ##p^2## are possible (I write ##p^2##, because in the infinite square well there are no states of definite ##p##).

But this is specific to that physical situation; in free space, for instance, neither ##k## nor ##p## is quantized.
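As a quick numerical illustration of the quantized values (a sketch with assumed numbers of my choosing: an electron in a 1 nm well), the allowed ##k_n = n\pi/L## give values of ##p^2## and energy levels scaling as ##n^2##:

```python
import numpy as np

hbar = 1.0545718e-34   # J*s
m = 9.109e-31          # electron mass, kg (assumed particle)
L = 1e-9               # 1 nm well (assumed width)

n = np.arange(1, 6)
k_n = n * np.pi / L          # allowed wavenumbers in the infinite well
p2_n = (hbar * k_n) ** 2     # allowed values of p^2
E_n = p2_n / (2 * m)         # energy levels

# Levels scale as n^2: E_n / E_1 = n^2.
assert np.allclose(E_n / E_n[0], n.astype(float) ** 2)
```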
 
  • #3
dyn said:
I don't understand what ψ (r)= <r|ψ> means. What is the difference between the wavefunction ψ and the ket |ψ> ?

The ket is a vector. The wavefunction is its representation in the position basis.

dyn said:
2. A similar equation is ψ (p) = <p|ψ>. Is this ket |ψ> the same as the one above or is it expressed in terms of p instead of r ?

The ket is exactly the same - only its representation is changed.

dyn said:
Momentum p=[itex]\hbar[/itex]k ; so if k is quantised as in the infinite square well why is p not quantised ?

I suspect momentum is quantised as well, but regardless, it's just the way the math works out.

Usually position is quantised because when you look at the path integral formalism some paths reach around and cancel some positions out - that's heuristic of course - the detail is in the math.

Thanks
Bill
 
  • #4
Thanks for your replies.
So , for a free particle position and momentum are continuous. For a bound particle eg infinite well ; momentum is quantised but surely position is still continuous ?
 
  • #5
Going back to the simplest case of an infinite well from x=0 to x=L the wavefunction is given by
ψ = A sin(nπx/L). In this case what is the difference between the wavefunction and the ket vector ?
On a side note the wavefunction exists in a superposition state until observed , but when observed it collapses to a particular value of n. Let's say n=2 which means a probability of zero of finding the particle in the middle of the well. Does this mean the particle is then confined to one half of the well as it cannot pass through the midpoint ?
 
  • #6
dyn said:
Does this mean the particle is then confined to one half of the well as it cannot pass through the midpoint ?

No. The particle has no position until it is measured, so you can't be thinking of it as moving around. "Moving" means changing position, so if it doesn't have a position it doesn't make sense to talk about it moving.

The wave function, written in the position basis, gives you the probability of finding it at a particular location in the well if you measure its position. The particle can appear anywhere that the amplitude of the wave function is non-zero. If the wave function is zero in the middle of the well and non-zero towards the ends, that just tells you that you might find it at either end but not in the middle.
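A short numerical check of this point (a sketch, with the well width set to 1 for convenience): the ##n=2## density vanishes at the midpoint, yet each half of the well still carries probability 1/2.

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 1001)
dx = x[1] - x[0]
psi2 = np.sqrt(2.0 / L) * np.sin(2 * np.pi * x / L)  # n = 2 eigenfunction
density = psi2 ** 2

# Zero probability density exactly at the midpoint ...
assert density[500] < 1e-10
# ... but the total probability in each half of the well is still 1/2.
left = np.sum(density[:500]) * dx
assert np.isclose(left, 0.5, atol=1e-3)
```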
 
  • #7
Nugatory said:
No. The particle has no position until it is measured, so you can't be thinking of it as moving around. "Moving" means changing position, so if it doesn't have a position it doesn't make sense to talk about it moving.

The wave function, written in the position basis, gives you the probability of finding it at a particular location in the well if you measure its position. The particle can appear anywhere that the amplitude of the wave function is non-zero. If the wave function is zero in the middle of the well and non-zero towards the ends, that just tells you that you might find it at either end but not in the middle.

But once the position has been measured and the wavefunction collapses to that state does the particle not have to remain in that half of the well ?

Also what is the difference between the wavefunction and the ket for an infinite well ?
 
  • #8
dyn said:
But once the position has been measured and the wavefunction collapses to that state does the particle not have to remain in that half of the well ?

It does not. The "collapsed" wave function has a sharp peak at one location, but it continues to evolve according to Schrodinger's equation and the peak spreads out and flattens over time.
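Here is a rough numerical sketch of that spreading (natural units ##\hbar = m = 1## and a narrow Gaussian standing in for the collapsed peak are my assumptions, not from the thread): expand the peaked state in the well's energy eigenstates, attach the phases ##e^{-iE_n t/\hbar}##, and watch the width grow.

```python
import numpy as np

L, hbar, m = 1.0, 1.0, 1.0           # natural units (an assumption)
x = np.linspace(0, L, 2000)
dx = x[1] - x[0]
N = 200                              # number of eigenstates kept

# Energy eigenfunctions and eigenvalues of the infinite well.
n = np.arange(1, N + 1)
phi = np.sqrt(2 / L) * np.sin(np.outer(n, x) * np.pi / L)   # shape (N, len(x))
E = (n * np.pi * hbar / L) ** 2 / (2 * m)

# "Collapsed" state: a narrow Gaussian near x0 (stand-in for a delta peak).
x0, sigma = 0.3, 0.01
psi0 = np.exp(-((x - x0) ** 2) / (2 * sigma ** 2))
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

# Expansion coefficients c_n = <phi_n | psi0>.
c = phi @ psi0 * dx

def evolve(t):
    """Attach the phases exp(-i E_n t / hbar) and resum."""
    return (c * np.exp(-1j * E * t / hbar)) @ phi

def width(psi):
    """Standard deviation of the position distribution |psi|^2."""
    rho = np.abs(psi) ** 2 * dx
    mean = np.sum(x * rho)
    return np.sqrt(np.sum((x - mean) ** 2 * rho))

w0 = width(evolve(0.0))
w1 = width(evolve(5e-4))
assert w1 > w0        # the peak spreads out over time
```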
 
  • #9
Sorry to appear stupid here. Maybe you could tell me where I'm going wrong. Before the measurement is made the wavefunction is in a superposition of states sin(nπx/L) with n running from 1 to ∞ ? When a measurement is made the wavefunction collapses to a single value of n ? Let's say it is n=2 which has a zero probability in the middle. The wavefunction is not a function of time so if the particle is measured in the RHS or LHS and it cannot be at the centre should it not remain there ?
Sorry again for not getting this.
 
  • #10
dyn said:
Sorry to appear stupid here. Maybe you could tell me where I'm going wrong. Before the measurement is made the wavefunction is in a superposition of states sin(nπx/L) with n running from 1 to ∞ ? When a measurement is made the wavefunction collapses to a single value of n ? Let's say it is n=2 which has a zero probability in the middle. The wavefunction is not a function of time so if the particle is measured in the RHS or LHS and it cannot be at the centre should it not remain there ?

It's the other way around.

The states identified by the number ##n## are energy eigenstates, states in which the energy is precisely fixed. Write one of these states as a function of position and you'll get ##\psi_n(x)=\sin(n\pi x/L)## (up to normalization), and that will give you the amplitude for finding the particle at that position if it has that energy. That's our starting point if we know the particle is in the well with some definite energy. It's clearly not a state of definite position.

When you measure the position of the particle and find it at position ##x_0##, the wave function collapses to ##\psi(x)=\delta(x-x_0)##. That state is a superposition of all the energy eigenstates, and it is not independent of time (the position operator does not commute with the Hamiltonian), so it can and does spread out.
 
  • #11
Thanks for that. My remaining questions are as follows if anyone can help ?

What is the difference between the wavefunction ψ = Asin(nπx/L) for the infinite well and the associated ket ?

My notes say that with position and momentum kets , the corresponding operators have continuous spectra so sums are replaced with integrals but as seen for the infinite well momentum is quantised so is not continuous. So is it only continuous when unbound ? And when bound is it treated as discrete with sums not integrals ?
 
  • #12
dyn said:
What is the difference between the wavefunction ψ = Asin(nπx/L) for the infinite well and the associated ket ?
Start with the answers from Bhobba and The_Duck above; if they don't work for you see if you can pick out exactly where you're getting lost and we'll be able to clear it up.

My notes say that with position and momentum kets , the corresponding operators have continuous spectra so sums are replaced with integrals but as seen for the infinite well momentum is quantised so is not continuous. So is it only continuous when unbound ? And when bound is it treated as discrete with sums not integrals ?
In the most common and important problems, yes, that's pretty much how it works. Position and momentum are continuous in the unbound states so you have to integrate; in the bound states you can sum instead.
 
  • #13
dyn said:
What is the difference between the wavefunction ψ = Asin(nπx/L) for the infinite well and the associated ket ?

It's a representation of the ket, which is an element of a conceptually infinite-dimensional vector space.

Grab a bit of paper and draw a line with an arrow on it. That is a two-dimensional vector. Now project its end point onto the edges of the paper - that is its representation. To tell someone else what the vector is, you can send the bit of paper with the line drawn on it - that would be telling them the actual vector. Or you can tell them its representation - ie the projections onto the sides of the paper - and they can reconstruct it from those two numbers.

Extending that to an infinite-dimensional space, you obviously can't send them an infinite-dimensional bit of paper with the actual vector - you can only tell them its representation - and since it's infinite-dimensional you can't send them an infinite set of numbers unless it's some mathematically defined sequence, but at least you can get it as accurate as you like by sending a lot of numbers.

Fortunately, in your example above, you only have to send your formula.

dyn said:
the corresponding operators have continuous spectra so sums are replaced with integrals but as seen for the infinite well momentum is quantised so is not continuous.

You can represent a vector in a number of different bases. In your example it's continuous in the x basis, where x is used as the label (without detailing exactly what a continuous basis is), and discrete in the basis using n as your label.

It's a subtle issue with Hilbert spaces that, unless you have studied them, can be a bit confusing.

But persevere.

Thanks
Bill
 
  • #14
Thanks for persevering with me. I practice with past exam papers and I do fine with them but when I delve deeper and come up with my own questions I get totally confused.
As regards a 1-D infinite well I have ψ(x) = Asin(nπx/L) = <x|ψ> so how do I find |ψ> ? It seems that in this case |ψ> is not important but I am trying to understand things starting with the most basic.
 
  • #15
dyn said:
so how do I find |ψ> ? It seems that in this case |ψ> is not important but I am trying to understand things starting with the most basic.

|ψ> is important - it's just that because it's infinite-dimensional you can't, like drawing a line on a bit of paper, know what it is; you can only know representations.

Have you studied linear algebra yet? Things will likely be a lot clearer after doing that.

Thanks
Bill
 
  • #16
I have done some introductory linear algebra eg matrices, determinants, eigenvalues and eigenvectors. I can cope with the exams at my level. It's just that I come up with my own questions and then confuse myself. Thanks for everyone's help.
 
  • #17
Perhaps some general remarks still help. Often this issue is confused in textbooks by overemphasizing the wave-mechanics point of view, and then students think the kets [itex]|\psi \rangle[/itex] are the same as the wave function [itex]\psi(x)[/itex]. This is not true!

The kets [itex]|\psi \rangle[/itex] are vectors in an abstract Hilbert space. For a single particle in non-relativistic quantum theory this is the separable Hilbert space. It's a well-known theorem from functional analysis that there is only one such Hilbert space, i.e., two realizations of this Hilbert space are always equivalent in the sense of a unitary mapping from one to the other realization.

E.g., you can work in the position representation. To that end you introduce generalized eigenkets of the position operator [itex]|x \rangle[/itex] which are not Hilbert-space vectors but distributions. The mathematical details are worth studying, but too lengthy to elaborate here. A good source is the book by Ballentine who uses the modern language of rigged Hilbert spaces. If you are interested in even more mathematical rigor, the two-volume book by Galindo and Pascual is a good source. Anyway, you have a complete set of generalized position eigenvectors and thus you can represent any vector in the abstract Hilbert space [itex]\mathcal{H}[/itex] by a square-integrable function [itex]\psi(x)=\langle x|\psi \rangle[/itex]. That shows that the abstract Hilbert-space formalism, invented by Dirac, is equivalent to Schrödinger's wave-mechanics formulation. You can go back and forth, using the completeness relation
[tex]\int_{\mathbb{R}} \mathrm{d} x |x \rangle \langle x|=\hat{1}.[/tex]
I.e., a given abstract ket [itex]|\psi \rangle \in \mathcal{H}[/itex] is mapped to the wave function
[tex]\psi(x)=\langle x|\psi \rangle \in \mathrm{L}^2(\mathbb{R}),[/tex]
where [itex]\mathrm{L}^2(\mathbb{R})[/itex] is the space of square-integrable functions [itex]\mathbb{R} \rightarrow \mathbb{C}[/itex]. On the other hand, if you have given an arbitrary square-integrable [itex]\psi(x)[/itex] you come back to the abstract vector by
[tex]|\psi \rangle=\int_{\mathbb{R}} \mathrm{d} x |x \rangle \langle x|\psi \rangle=\int_{\mathbb{R}} \mathrm{d} x |x \rangle \psi(x).[/tex]
Here, we have used the above stated completeness relation.
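The completeness relation can be mimicked on a finite grid (a discretized sketch of my own, not the rigorous rigged-Hilbert-space statement): the generalized kets become columns e_i/√dx, so that the sum ∑ dx |x_i><x_i| reproduces the identity and returns any vector unchanged.

```python
import numpy as np

# Discretize position on a grid; the generalized kets |x_i> become
# columns e_i / sqrt(dx), so that <x_i|x_j> ~ delta_ij / dx,
# mimicking the Dirac-delta normalization of position eigenkets.
M = 50
dx = 0.1
kets = np.eye(M) / np.sqrt(dx)     # column i is the discretized |x_i>

# Discretized completeness relation: sum_i dx |x_i><x_i| = identity.
resolution = sum(dx * np.outer(kets[:, i], kets[:, i]) for i in range(M))
assert np.allclose(resolution, np.eye(M))

# Applying it to any vector gives the vector back:
# |psi> = sum_i dx |x_i><x_i|psi>.
psi = np.random.default_rng(0).normal(size=M)
assert np.allclose(resolution @ psi, psi)
```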

Now, the Hilbert space is separable. This means there is a discrete basis of orthogonal vectors. An example is the set of energy eigenvectors of the harmonic oscillator, which I call [itex]|n \rangle[/itex], where [itex]n \in \mathbb{N}_0=\{0,1,2,3,\ldots \}[/itex]. The energy eigenvalues are [itex]E_n=\hbar \omega (n+1/2)[/itex]. Also this basis obeys a completeness relation,
[tex]\sum_{n=0}^{\infty} |n \rangle \langle n|=\hat{1}.[/tex]
Thus you can map any state ket [itex]|\psi \rangle[/itex] to a square-summable sequence. This realizes the separable Hilbert space as the Hilbert space [itex]\ell^2[/itex] of such square-summable sequences. You just map [itex]|\psi \rangle[/itex] to the sequence defined by
[tex]\psi_n=\langle n | \psi \rangle,[/tex]
and if you have given a sequence [itex](\psi_n) \in \ell^2[/itex] you get back the abstract ket via the completeness relation
[tex]|\psi \rangle=\sum_{n=0}^{\infty} |n \rangle \langle n|\psi \rangle=\sum_{n=0}^{\infty} |n \rangle \psi_n.[/tex]
The formulation of quantum mechanics in the Hilbert space of square-summable sequences, [itex]\ell^2[/itex] is known as "matrix mechanics", worked out by Born, Jordan, and Heisenberg even before wave mechanics.

Of course you can also directly map from the wave-mechanics to the matrix-mechanics realization using the appropriate completeness relations. Suppose you've given a sequence [itex](\psi_n) \in \ell^2[/itex]. Then you have
[tex]\psi(x) = \langle x|\psi \rangle=\sum_{n=0}^{\infty} \langle x|n \rangle \langle n|\psi \rangle=\sum_{n=0}^{\infty} u_n(x) \psi_n.[/tex]
Here, [itex]u_n(x)=\langle x|n \rangle[/itex] is the solution of the time-independent Schrödinger equation (eigenstates of the Hamiltonian) of the harmonic oscillator. Of course, you can also give the corresponding back transformation from the wave function to the sequence. This provides a unitary mapping between [itex]\ell^2[/itex] and [itex]\mathrm{L}^2(\mathbb{R})[/itex].
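A numerical sketch of this unitary mapping (the grid size and the test sequence are arbitrary choices of mine): build the normalized oscillator eigenfunctions u_n(x) by the standard recursion, check orthonormality on the grid, and map a sequence (ψ_n) in ℓ² to ψ(x) in L² and back.

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
N = 8

# Normalized harmonic-oscillator eigenfunctions u_n(x) via the standard
# stable recursion u_{n+1} = sqrt(2/(n+1)) x u_n - sqrt(n/(n+1)) u_{n-1}.
u = np.zeros((N, len(x)))
u[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
u[1] = np.sqrt(2.0) * x * u[0]
for n in range(1, N - 1):
    u[n + 1] = np.sqrt(2.0 / (n + 1)) * x * u[n] - np.sqrt(n / (n + 1)) * u[n - 1]

# Orthonormality on the grid: <u_m|u_n> = delta_mn.
gram = u @ u.T * dx
assert np.allclose(gram, np.eye(N), atol=1e-6)

# Map a sequence (psi_n) in l^2 to a wavefunction psi(x) in L^2 and back.
psi_n = np.array([0.6, 0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0])
psi_x = psi_n @ u                      # psi(x) = sum_n u_n(x) psi_n
back = u @ psi_x * dx                  # psi_n = <u_n | psi>
assert np.allclose(back, psi_n, atol=1e-6)

# The mapping is unitary: norms agree (Parseval).
assert np.isclose(np.sum(psi_x ** 2) * dx, np.sum(psi_n ** 2), atol=1e-6)
```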

I hope the issue is now a bit clearer.
 
  • #18
It seems that the wavefunction ψ(r) is the ultimate goal because if we take its modulus squared it gives the probability of finding a particle at different positions. Is the ket vector just a tool for calculations or does it have a more fundamental meaning ?
 
  • #19
The ket has a more fundamental meaning. From it, we get not only ##\psi(x)=\langle x|\psi\rangle##, whose absolute square gives us the probability density to find the particle near ##x##, but also ##\tilde\psi(p)=\langle p |\psi\rangle##, whose absolute square gives us the probability density to find the particle with momentum near ##p##, and also ##\psi_n=\langle n|\psi\rangle## (where ##|n\rangle## is the ##n##th energy eigenstate), whose absolute square gives us the probability to find the particle with energy ##E_n##. (Here I have assumed discrete energy eigenstates.) All of these different probabilities (and many more, for any observable at all) are implicitly contained in the ket vector ##|\psi\rangle##.

However, we still have to specify ##|\psi\rangle## in some form. To do so, we need to know one of its representations, ##\psi(x)## or ##\tilde\psi(p)## or ##\psi_n## or some other, explicitly.
 
  • #20
Isn't one of the postulates of QM that the wavefunction ψ(r ,t) contains all of the information we could possibly know about a particle ? And ψ(p , t) can be obtained by a Fourier transform of ψ(r , t). So we can find everything we want to know without using kets ?
 
  • #21
dyn said:
Isn't one of the postulates of QM that the wavefunction ψ(r ,t) contains all of the information we could possibly know about a particle ? And ψ(p , t) can be obtained by a Fourier transform of ψ(r , t). So we can find everything we want to know without using kets ?

Yes, the wave function contains everything, and the Fourier transform gives the same information in a different way. Because the information is the same whether we use ψ(x) or ψ(p), we say that they are different representations of the same object - the ket or vector |ψ>.

If we have a vector, we can write it as v = a1e1 +a2e2, or as a column vector (a1 a2)T. But if we choose different basis vectors, then the same vector can be written as v = b1f1 +b2f2, or as a column vector (b1 b2)T.

Just as v is a vector, the ket |ψ> is also a vector.
Just as (a1 a2)T and (b1 b2)T are different coordinate representations of the vector v in different bases, the wave functions ψ(x) and ψ(p) are also the coordinates of the ket |ψ> in different bases.

To extract the coordinates given an orthonormal basis eg. a1, one takes the scalar product a1 = (e1,v).
That is exactly the same as writing ψ(x) = <x|ψ>. The only notational difference is that x is a continuous index, whereas the coordinates and basis vectors for the vector v had a discrete index which ran from 1 to 2.
(There are subtleties, but we can leave that to the mathematicians for the time being.)
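The continuous version of extracting coordinates can be sketched with a discrete Fourier transform (ħ = 1 and the grid parameters are assumptions of mine): the position and momentum representations of the same wave packet each carry total probability 1.

```python
import numpy as np

# A normalized Gaussian wave packet on a grid (position representation).
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
psi_x = np.exp(-x ** 2 / 2 + 1j * 3.0 * x)
psi_x /= np.sqrt(np.sum(np.abs(psi_x) ** 2) * dx)

# Momentum representation via discrete Fourier transform (hbar = 1 assumed).
psi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2 * np.pi)
dp = 2 * np.pi / L

# Same information, different basis: total probability is 1 in both.
assert np.isclose(np.sum(np.abs(psi_x) ** 2) * dx, 1.0)
assert np.isclose(np.sum(np.abs(psi_p) ** 2) * dp, 1.0)
```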
 
  • #22
dyn said:
So we can find everything we want to know without using kets?

In principle, yes, but for many problems that's the hard way of doing it. Choosing to attack a problem using a representation in which ##\psi## is written as a function of position and time is like choosing to solve a problem in Cartesian coordinates - it may or may not be the easiest approach. Before committing yourself, you'd look at the problem and decide whether you want to write your vectors in terms of x, y, and z components, or r, ##\theta##, ##\phi## components, or something else altogether.

For an example of a problem that is more easily solved using the abstract bra-ket notation than by using a function of x and t, look at the quantum mechanical harmonic oscillator.
 
  • #23
dyn said:
Isn't one of the postulates of QM that the wavefunction ψ(r ,t) contains all of the information we could possibly know about a particle ?

No.

If you want to see a proper axiomatic treatment get a hold of Ballentine - Quantum Mechanics - A Modern Development and read the first three chapters.

The math is probably above your level right now, but you will likely get the gist and can return to it later when your math is a bit more advanced.

He also carefully explains the issue you are discussing here, by which I mean it's trivial when you understand the basic principles. For example, a quantum state isn't really an element of a vector space; it's actually an operator.

Thanks
Bill
 
  • #24
atyy said:
Yes, the wave function contains everything, and the Fourier transform gives the same information in a different way. Because the information is the same whether we use ψ(x) or ψ(p), we say that they are different representations of the same object - the ket or vector |ψ>.

If we have a vector, we can write it as v = a1e1 +a2e2, or as a column vector (a1 a2)T. But if we choose different basis vectors, then the same vector can be written as v = b1f1 +b2f2, or as a column vector (b1 b2)T.

Just as v is a vector, the ket |ψ> is also a vector.
Just as (a1 a2)T and (b1 b2)T are different coordinate representations of the vector v in different bases, the wave functions ψ(x) and ψ(p) are also the coordinates of the ket |ψ> in different bases.

To extract the coordinates given an orthonormal basis eg. a1, one takes the scalar product a1 = (e1,v).
That is exactly the same as writing ψ(x) = <x|ψ>. The only notational difference is that x is a continuous index, whereas the coordinates and basis vectors for the vector v had a discrete index which ran from 1 to 2.
(There are subtleties, but we can leave that to the mathematicians for the time being.)

bhobba said:
No.

If you want to see a proper axiomatic treatment get a hold of Ballentine - Quantum Mechanics - A Modern Development and read the first three chapters.

The math is probably above your level right now, but you will likely get the gist and can return to it later when your math is a bit more advanced.

He also carefully explains the issue you are discussing here, by which I mean it's trivial when you understand the basic principles. For example, a quantum state isn't really an element of a vector space; it's actually an operator.

Thanks
Bill

I will try and get hold of the Ballentine book. But for now I've got conflicting answers. The wave function contains everything ! and it doesn't ! I have looked in several books. Some say the wavefunction contains everything we need to know and some say it is the state vector that contains everything. But they are both derivable from each other. So do they both contain everything we need to know ?
 
  • #25
dyn said:
I will try and get hold of the Ballentine book. But for now I've got conflicting answers. The wave function contains everything ! and it doesn't ! I have looked in several books. Some say the wavefunction contains everything we need to know and some say it is the state vector that contains everything. But they are both derivable from each other. So do they both contain everything we need to know ?

Yes, since the wave function is the representation of the state vector in the position basis, they are equivalent.

The state vector is generally preferred, because there are systems such as a single spin on a lattice, in which position is not a degree of freedom, so there is no position basis. However, usage is loose enough that if you say the wave function of a single spin on a lattice, people will generally know you mean the state vector of that system, or the state vector in some basis that is appropriate for degrees of freedom of that system.
 
  • #26
dyn said:
I will try and get hold of the Ballentine book. But for now I've got conflicting answers. The wave function contains everything ! and it doesn't ! I have looked in several books.

The math is probably a bit beyond what you know right now, but really it's the only way to explain what's going on, so I will post the full detail of exactly what the state is, from what I said in another thread.

It's based on a bit of advanced math called Gleason's theorem that is usually only discussed in advanced treatments, but for me it's the best way to understand what the state is.

First we need to define a Positive Operator Valued Measure (POVM). A POVM is a set of positive operators Ei with ∑ Ei = 1, defined, for the purposes of QM, on an assumed complex vector space.

Elements of POVMs are called effects, and it's easy to see a positive operator E is an effect iff Trace(E) <= 1.

Now we can state the single foundational axiom QM is based on, in the way I look at it, which is a bit different than Ballentine, who simply states the axioms without a discussion of why they are true - it's interesting that it can be reduced to basically just one. Of course there is more to QM than just one axiom - but the rest follow in a natural way.

An observation/measurement with possible outcomes i = 1, 2, 3 ... is described by a POVM Ei such that the probability of outcome i is determined by Ei, and only by Ei; in particular, it does not depend on what POVM it is part of.

It's very strange, but still true, that this is basically all that's required for QM. The state, and what it is, follows from this.

Only by Ei means that regardless of what POVM the Ei belongs to, the probability is the same. This is the assumption of non-contextuality and is the well-known rock-bottom essence of Born's rule via Gleason. The other assumption, not explicitly stated but used, is the strong law of superposition, ie in principle any POVM corresponds to an observation/measurement.

I will let f(Ei) be the probability of Ei. Obviously f(I) = 1, since that POVM contains only one element. Since I + 0 = I, f(0) = 0.

First, additivity of the measure for effects.

Let E1 + E2 = E3, where E1, E2 and E3 are all effects. Then there exists an effect E with E1 + E2 + E = E3 + E = I. Hence f(E1) + f(E2) = f(E3).

Next, linearity wrt the rationals - it's the usual standard argument from additivity in linear algebra, but I will repeat it anyway.

f(E) = f(n E/n) = f(E/n + ... + E/n) = n f(E/n), or 1/n f(E) = f(E/n). f(m E/n) = f(E/n + ... + E/n), or m/n f(E) = f(m/n E), if m <= n to ensure we are dealing with effects.

Will extend the definition to any positive operator E. If E is a positive operator, an n and an effect E1 exist with E = n E1, as easily seen from the fact that effects are positive operators with trace <= 1. f(E) is defined as n f(E1). To show this is well defined, suppose n E1 = m E2. Then n/(n+m) E1 = m/(n+m) E2, so f(n/(n+m) E1) = f(m/(n+m) E2), ie n/(n+m) f(E1) = m/(n+m) f(E2), so n f(E1) = m f(E2).

From the definition it's easy to see that for any positive operators E1, E2, f(E1 + E2) = f(E1) + f(E2). Then, similarly to effects, show for any rational m/n that f(m/n E) = m/n f(E).

Now we want to show continuity, to extend this to the reals.

If E1 and E2 are positive operators, define E2 < E1 to mean a positive operator E exists with E1 = E2 + E. This implies f(E2) <= f(E1). Let r1n be an increasing sequence of rationals whose limit is the irrational number c, and let r2n be a decreasing sequence of rationals whose limit is also c. If E is any positive operator, r1n E < cE < r2n E, so r1n f(E) <= f(cE) <= r2n f(E). Thus by the pinching theorem, f(cE) = c f(E).

Extending it to any Hermitian operator H.

H can be broken down as H = E1 - E2, where E1 and E2 are positive operators, by for example separating the positive and negative eigenvalues of H. Define f(H) = f(E1) - f(E2). To show this is well defined: if E1 - E2 = E3 - E4, then E1 + E4 = E3 + E2, so f(E1) + f(E4) = f(E3) + f(E2), ie f(E1) - f(E2) = f(E3) - f(E4). Actually there was no need to show uniqueness, because I could have defined E1 and E2 to be the positive operators from separating the eigenvalues, but what the heck - it's not hard to show uniqueness.

It's easy to show linearity wrt the reals under this extended definition.

It's pretty easy to see the pattern here, but just to complete it I will extend the definition to any operator O. O can be uniquely decomposed as O = H1 + i H2, where H1 and H2 are Hermitian. f(O) = f(H1) + i f(H2). Again it's easy to show linearity wrt the reals under this new definition, then extend it to linearity wrt complex numbers.

Now the final bit. The hard part - namely linearity wrt any operator - has been done by extending the f defined on effects. The well-known Von Neumann argument can be used to derive Born's rule. But for completeness I will spell out the detail.

First, it's easy to check <bi|O|bj> = Trace (O |bj><bi|).

O = ∑ <bi|O|bj> |bi><bj| = ∑ Trace (O |bj><bi|) |bi><bj|

Now we use the linearity that the foregoing extensions of f have led to.

f(O) = ∑ Trace (O |bj><bi|) f(|bi><bj|) = Trace (O ∑ f(|bi><bj|)|bj><bi|)

Define P as ∑ f(|bi><bj|)|bj><bi| and we have f(O) = Trace (OP).

P, by definition, is called the state of the quantum system. The following are easily seen. Since f(I) = 1, Trace(P) = 1, so P has unit trace. f(|u><u|) is a number >= 0, since |u><u| is an effect. Thus Trace(|u><u| P) = <u|P|u> >= 0, so P is positive.

Hence a positive operator of unit trace P, the state of the system, exists such that the probability of Ei occurring in the POVM E1, E2 ... is Trace (Ei P).
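This rule is easy to try numerically. A minimal sketch (the qubit state and the projective POVM below are arbitrary choices of mine, not from the post): the outcome probabilities Trace(Ei P) are non-negative and sum to 1.

```python
import numpy as np

# A qubit state as a density operator P (here a pure state, an arbitrary choice).
v = np.array([1.0, 1.0j]) / np.sqrt(2)
P = np.outer(v, v.conj())
assert np.isclose(np.trace(P).real, 1.0)    # unit trace

# A simple POVM: projectors onto |0> and |1> (a resolution of the identity).
E = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
assert np.allclose(sum(E), np.eye(2))

# Probability of outcome i is Trace(Ei P); the probabilities sum to 1.
probs = [np.trace(Ei @ P).real for Ei in E]
assert all(p >= 0 for p in probs)
assert np.isclose(sum(probs), 1.0)
```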

To derive Ballentine's two axioms we need to define what is called a resolution of the identity, which is a POVM that is disjoint. Such observations are called Von Neumann observations. We know from the spectral theorem that Hermitian operators H can be uniquely decomposed into resolutions of the identity, H = ∑ yi Ei. So what we do is, given any observation based on a resolution of the identity Ei, we associate a real number yi with each outcome and uniquely define a Hermitian operator O = ∑ yi Ei, called the observable of the observation.

This gives the first axiom found in Ballentine - but the wording I will use is slightly different because of the way I have presented it, which differs from Ballentine - eg he doesn't point out he is talking about Von Neumann measurements, but measurements in general are wider than that, although all measurements can be reduced to Von Neumann measurements by considering a probe interacting with a system - but that is another story.

Axiom 1
Associated with each Von Neumann measurement we can find a Hermitian operator O, called the observation's observable, such that the possible outcomes of the observation are its eigenvalues yi.

Axiom 2 - called the Born Rule
Associated with any system is a positive operator of unit trace, P, called the state of the system, such that the expected value of the outcomes of the observation is Trace (PO).

Axiom 2 is easy to see from what I wrote previously: E(O) = ∑ yi probability(Ei) = ∑ yi Trace (PEi) = Trace (PO).
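That identity can be verified directly for a small example (the spin-z observable and the state below are assumed inputs of mine): the expectation via Trace(PO) equals the weighted sum of outcome probabilities ∑ yi Trace(P Ei).

```python
import numpy as np

# Outcomes y = (+1, -1) attached to the projectors of a resolution of the
# identity (a Von Neumann measurement); the observable is O = sum_i y_i E_i.
E1, E2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
y = np.array([1.0, -1.0])
O = y[0] * E1 + y[1] * E2           # here this is the Pauli-z matrix

# A state P: positive with unit trace (a pure state, chosen arbitrarily).
v = np.array([np.cos(0.3), np.sin(0.3)])
P = np.outer(v, v)
assert np.isclose(np.trace(P), 1.0)

# Expectation two ways: Trace(PO) versus sum_i y_i Trace(P Ei).
lhs = np.trace(P @ O)
rhs = sum(yi * np.trace(P @ Ei) for yi, Ei in zip(y, (E1, E2)))
assert np.isclose(lhs, rhs)
```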

Now using these two axioms Ballentine develops all of QM.

A word of caution however. Other axioms are introduced as you go - but they occur in a natural way. Schroedinger's equation is developed from probabilities being invariant between frames, ie the Principle Of Relativity. That the state after a filtering-type observation is an eigenvector of the observable is a consequence of continuity.

From this we see the state is simply a mathematical requirement that helps in calculating the probability of the outcomes of observations.

Now for the answer to your question.

The state (or wavefunction, which is simply the representation of the state in the position basis) contains everything needed to calculate the probabilities of the outcomes of observations. Now for the subtle point - it's a matter of opinion and interpretation whether the outcomes of observations and their probabilities are all that's going on.

Thanks
Bill
 
  • #27
Thanks for that. I hope to be able to understand all of it one day. I now know the difference between the wavefunction and the ket. But what is a ket ? An infinite dimensional column vector of what ? Infinite in what respect ? We can find <r|ψ> and <p|ψ> so <r| and <p| must both be infinite dimensional row vectors ?
 
  • #28
dyn said:
Thanks for that. I hope to be able to understand all of it one day. I now know the difference between the wavefunction and the ket. But what is a ket ? An infinite dimensional column vector of what ? Infinite in what respect ? We can find <r|ψ> and <p|ψ> so <r| and <p| must both be infinite dimensional row vectors ?

If you haven't yet done so, dig up the mathematical definition of a "vector space"
 
  • #29
I looked it up in Shankar. My understanding is that the range of r or p is "chopped" up into n segments and then n→∞. But what are the infinite elements of the ket |ψ> ? They must be independent of any particular basis eg. position or momentum so I can't picture what they are ?
 
  • #30
dyn said:
Thanks for that. I hope to be able to understand all of it one day. I now know the difference between the wavefunction and the ket. But what is a ket ? An infinite dimensional column vector of what ? Infinite in what respect ? We can find <r|ψ> and <p|ψ> so <r| and <p| must both be infinite dimensional row vectors ?

Roughly, a ket can be represented as a column vector. So let's say there are only 2 possible positions. Also let us choose to represent the ket |x=1> as the column vector [1 0]T, and the ket |x=2> as the column vector [0 1]T, ie. we choose as basis vectors states of definite position. An arbitrary ket is then |ψ>=ψ(1)|x=1>+ψ(2)|x=2>, or equivalently the wavefunction [ψ(1) ψ(2)]T, or equivalently the wavefunction ψ(x) where x is an index that runs from 1 to 2.

However, x actually is not discrete with only 2 values, it runs continuously. So if we use basis vectors with a definite position, then the ket |ψ> is an infinite dimensional column vector. An element of this column vector ψ(x) is the probability amplitude that a particle will be found at location x.

The above is, rigorously speaking, incorrect, because there are some subtleties for infinite dimensional spaces, but the idea is roughly ok. Take a look at the explanations in http://physics.mq.edu.au/~jcresser/Phys304/Handouts/QuantumPhysicsNotes.pdf (chapters 8-10).
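The two-position picture can also be sketched numerically (the amplitudes 0.6 and 0.8 below are invented for illustration; they are chosen so the state is normalized):

```python
# Basis kets |x=1> and |x=2> as plain lists standing in for column vectors.
x1 = [1, 0]
x2 = [0, 1]

# An arbitrary normalized ket |psi> = psi(1)|x=1> + psi(2)|x=2>.
psi1, psi2 = 0.6, 0.8    # chosen so |psi1|^2 + |psi2|^2 = 1
psi = [psi1 * x1[i] + psi2 * x2[i] for i in range(2)]

# Each element psi(x) is the probability amplitude for finding the
# particle at position x; the probability itself is |psi(x)|^2.
probs = [abs(c) ** 2 for c in psi]
print(psi)     # the column vector [psi(1) psi(2)]T
print(probs)   # close to [0.36, 0.64]; sums to 1
```

The continuum case replaces the 2-element list with a function ψ(x) of a continuous index, which is where the infinite-dimensional subtleties come in.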
 
  • #31
atyy said:
Roughly, a ket can be represented as a column vector. So let's say there are only 2 possible positions. Also let us choose to represent the ket |x=1> as the column vector [1 0]T, and the ket |x=2> as the column vector [0 1]T, ie. we choose as basis vectors states of definite position. An arbitrary ket is then |ψ>=ψ(1)|x=1>+ψ(2)|x=2>, or equivalently the wavefunction [ψ(1) ψ(2)]T, or equivalently the wavefunction ψ(x) where x is an index that runs from 1 to 2.

However, x actually is not discrete with only 2 values, it runs continuously. So if we use basis vectors with a definite position, then the ket |ψ> is an infinite dimensional column vector. An element of this column vector ψ(x) is the probability amplitude that a particle will be found at location x.

The above is, rigorously speaking, incorrect, because there are some subtleties for infinite dimensional spaces, but the idea is roughly ok. Take a look at the explanations in http://physics.mq.edu.au/~jcresser/Phys304/Handouts/QuantumPhysicsNotes.pdf (chapters 8-10).

Thanks. You said an element of the column vector ψ(x). Did you mean an element of the ket |ψ> ? But you then relate it to location x. I thought kets are independent of basis ? So why would it be location x and not momentum p or some other basis ?
 
  • #32
dyn said:
Thanks. You said an element of the column vector ψ(x). Did you mean an element of the ket |ψ> ?

Before you represent a ket as a column vector, you must always choose a basis. In the above the choice of basis means we choose |x=1> to be the column vector [1 0]T, and |x=2> to be the column vector [0 1]T.

Then the ket |ψ> will be the column vector which can be written [ψ(1) ψ(2)]T; for short, ψ(x) denotes an element of the column vector [ψ(1) ψ(2)]T.

dyn said:
But you then relate it to location x. I thought kets are independent of basis ? So why would it be location x and not momentum p or some other basis ?

Yes, because I chose at the start a basis in which a state with a definite position |x=1> is the column vector [1 0]T, and the state with definite position |x=2> is the column vector [0 1]T. This is why the column vector [ψ(1) ψ(2)]T is also written as [ψ(x=1) ψ(x=2)]T, or for short ψ(x) is an element of that column vector.

If at the start I had chosen to represent the state of definite momentum as the basis, eg. choose |p=1> as the column vector [1 0]T, then the elements of the column vector representing the ket |ψ> would be ψ(p).
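A toy sketch of the same ket expressed in two different bases (the 2-dimensional "momentum" basis here is an invented stand-in for the actual Fourier relation between position and momentum, and the amplitudes are made up for illustration):

```python
import math

# "Momentum" basis vectors written in the position basis:
# |p=1> = (|x=1> + |x=2>)/sqrt(2), |p=2> = (|x=1> - |x=2>)/sqrt(2).
s = 1 / math.sqrt(2)
p1 = [s, s]
p2 = [s, -s]

def inner(bra, ket):
    # <bra|ket> for real vectors (no complex conjugation needed here).
    return sum(b * k for b, k in zip(bra, ket))

# Components <x|psi> of a ket in the position basis.
psi_x = [0.6, 0.8]

# Components of the SAME ket in the momentum basis: psi(p) = <p|psi>.
psi_p = [inner(p1, psi_x), inner(p2, psi_x)]

# The ket itself is unchanged by the change of basis: the total
# probability is 1 in either basis.
print(sum(c ** 2 for c in psi_x))
print(sum(c ** 2 for c in psi_p))
```

The list `psi_x` and the list `psi_p` look completely different, yet they are two descriptions of the one basis-independent ket |ψ>, which is the point being made above.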
 
  • #33
Some things are clearer now but as for the rest ; my head is spinning more and more. I just want to thank everyone who has persevered with me on this thread.
 

1. What is a wavefunction?

A wavefunction is a mathematical description of a quantum system that encodes all the information needed to predict the probabilities of measurement outcomes for the system's physical properties, such as position, momentum, and energy.

2. What is a ket in quantum mechanics?

A ket is a mathematical representation of a quantum state in the bra-ket notation used in quantum mechanics. It is denoted by the symbol |ψ⟩ and represents the state vector of a quantum system.

3. How are wavefunctions and kets related?

Wavefunctions and kets are closely related in quantum mechanics. The ket is the abstract, basis-independent representation of a quantum state, while the wavefunction is the set of components of that ket in a particular basis. The wavefunction is obtained by taking the inner product of the ket with a basis bra, e.g. ψ(r) = ⟨r|ψ⟩.

4. Can a ket represent more than one wavefunction?

A ket represents a single quantum state, which in a given basis corresponds to a single wavefunction. A ket can be a superposition of basis states, but such a superposition is still one state, and hence one wavefunction.

5. What is the difference between a wavefunction and a probability amplitude?

A wavefunction is a complex-valued function whose squared modulus gives the probability density for finding a particle at a given position. The value of the wavefunction at a point is itself the probability amplitude; the two terms refer to essentially the same object, with "amplitude" emphasizing that probabilities are obtained by taking the squared modulus.
