Role of determinate states in quantum mechanics

  • #1
MatthijsRog
Hi,

I'm an undergrad following my very first serious course in QM. We're using Griffiths' book, and so far we're staying close to the text in terms of course structure.

Griffiths starts out his book by postulating that each and every state [itex]\Psi[/itex] of any system must be a solution to the Schrödinger equation. He then introduces two things: first, the statistical interpretation of this function; second, the notion of operators [itex]\hat{Q}[/itex] corresponding to observables [itex]Q[/itex], so that the integral of [itex]\Psi^* \hat{Q} \Psi[/itex] gives the expectation value of the observable [itex]Q[/itex]. So far I understand everything.

After working through some example solutions of the Schrödinger equation, Griffiths moves on to the formalism of QM: he recasts it in the language of linear algebra. He shows us that the operators [itex]\hat{Q}[/itex] can be thought of as linear, Hermitian operators acting on functions in Hilbert space (and thus on the wave/state functions). He then introduces the notion of a determinate state: a state that is an eigenfunction of an operator. He proves that if a system is in such a state, the spread (variance) of the observable is zero: every measurement will give the same outcome. Additionally, since the operator is Hermitian, its eigenfunctions form a basis for the Hilbert space.

Now my question: is it true that not all of these eigenstates are admissible states to the problem at hand? All states must solve the Schrödinger equation. So while my wave function must be a linear combination of eigenstates, is it true that it is only a linear combination of eigenstates that solve the Schrödinger equation?

Does that mean that the "plan of attack" for quantum mechanical problems becomes the following: instead of integrating the awful [itex]\Psi^* \hat{Q} \Psi[/itex], we find the eigenstates of [itex]\hat{Q}[/itex], express our wavefunction in this basis, and use the coefficients of this projection to determine the expectation value? Does that mean we still must find [itex]\Psi[/itex] by solving Schrödinger's equation?

I know it's a lot of question marks up there, but students are known to get lost at this juncture, and I want to make sure I really understand what I'm doing. Thanks in advance for helping out!
 
  • #2
MatthijsRog said:
Now my question: is it true that not all of these eigenstates are admissible states to the problem at hand? All states must solve the Schrödinger equation. So while my wave function must be a linear combination of eigenstates, is it true that it is only a linear combination of eigenstates that solve the Schrödinger equation?

There are a couple of things. First, you have to distinguish between a solution to the full Schrödinger equation for a wavefunction ##\Psi(x, t)## and a solution to the time-independent Schrödinger equation (TISE), ##\psi(x)##.

At time ##t = 0##, any well-behaved function can, within reason, serve as the initial wavefunction for any potential. It's how that wavefunction evolves in time that must satisfy the (full) Schrödinger equation.

The most important operator is the Hamiltonian, which often represents the total energy of the system. In the first examples you encounter, the potential is time-independent: ##V## is a function of position but doesn't change over time. The Hamiltonian is important both mathematically and physically.

Mathematically, you can solve the Schrödinger equation by separation of variables, which leads to the TISE, an eigenvalue equation for the Hamiltonian. These eigenstates form a basis for the Hilbert space of wavefunctions, so you can express the initial state as a linear combination of them:

##\Psi(x, 0) = \sum_n c_n \psi_n(x), \qquad c_n = \int \psi_n^*(x)\, \Psi(x, 0)\, dx##

And, from the separation of variables, the full solution is:

##\Psi(x, t) = \sum_n c_n \psi_n(x)\, e^{-iE_n t/\hbar}##

The mathematics of separation of variables aligns with the physical fact that energy eigenstates evolve "independently" over time. Note that the separated time factor has modulus one, which implies that the probability of any energy measurement outcome is constant in time.
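
To make this concrete, here's a quick numerical sketch (my own toy example, not from Griffiths): the infinite square well on ##[0, 1]## in units ##\hbar = 2m = 1##, so ##E_n = (n\pi)^2## and ##\psi_n(x) = \sqrt{2}\sin(n\pi x)##. We expand an initial state in the energy eigenbasis, evolve it, and check that the probabilities ##|c_n|^2## are unchanged:

```python
import numpy as np

# Toy example: infinite square well on [0, 1], hbar = 2m = 1, E_n = (n*pi)^2,
# with eigenfunctions psi_n(x) = sqrt(2) sin(n*pi*x).
N = 25                            # number of energy eigenstates kept
x = np.linspace(0, 1, 2001)
dx = x[1] - x[0]
psi = np.array([np.sqrt(2) * np.sin(n * np.pi * x) for n in range(1, N + 1)])
E = np.array([(n * np.pi) ** 2 for n in range(1, N + 1)])

# Initial state Psi(x, 0) proportional to x(1 - x), normalized on the grid.
Psi0 = x * (1 - x)
Psi0 = Psi0 / np.sqrt(np.sum(Psi0 ** 2) * dx)

# Expansion coefficients c_n = <psi_n | Psi(0)>, approximated on the grid.
c = psi @ Psi0 * dx

# Evolve to some time t and project again: each c_n only picks up a phase
# factor, so the measurement probabilities |c_n|^2 are unchanged.
t = 0.37
Psit = (c * np.exp(-1j * E * t)) @ psi
c_t = psi @ Psit * dx
print(np.allclose(np.abs(c_t) ** 2, np.abs(c) ** 2))  # True
```

Almost all of the probability happens to sit in the ground state for this particular initial shape (##|c_1|^2 \approx 0.999##), but the point is that none of the ##|c_n|^2## depend on ##t##.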

Now, it's entirely possible to express the initial wavefunction as a linear combination in any other basis. Let's say a basis ##\{u_n(x)\}## of eigenfunctions of some other observable, so that we have:

##\Psi(x, 0) = \sum d_n u_n(x)##

But in this basis you don't have a nice formula for how the coefficients evolve over time; in general, the probabilities of the measurement outcomes for this observable change over time. Still, buried somewhere, there is a solution of the form:

##\Psi(x, t) = \sum d_n(t) u_n(x)##

It's just not so easy to work out what the ##d_n(t)## are, as they change over time.
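
Here's a minimal illustration of that last point (a two-level toy model of my own, not from Griffiths): take ##H = \sigma_z## and the observable ##Q = \sigma_x##, with ##\hbar = 1##. In the energy basis the evolution is just phase factors, but the coefficients ##d_\pm(t)## in the ##\sigma_x## eigenbasis genuinely change:

```python
import numpy as np

# Two-level toy model: H = sigma_z (energies +1 and -1), observable Q = sigma_x,
# with hbar = 1.  The energy eigenbasis is (1, 0) and (0, 1).
H_eigvals = np.array([1.0, -1.0])
u_plus = np.array([1, 1]) / np.sqrt(2)     # sigma_x eigenvector, eigenvalue +1
u_minus = np.array([1, -1]) / np.sqrt(2)   # sigma_x eigenvector, eigenvalue -1

# Start in the sigma_x "up" state, so d_+(0) = 1 and d_-(0) = 0.
Psi0 = u_plus

def d_coeffs(t):
    """Evolve in the energy basis, then project onto the sigma_x eigenbasis."""
    Psit = np.exp(-1j * H_eigvals * t) * Psi0  # evolution is diagonal here
    return np.vdot(u_plus, Psit), np.vdot(u_minus, Psit)

# |d_+(t)|^2 = cos^2(t): the sigma_x probabilities oscillate in time.
for t in [0.0, np.pi / 4, np.pi / 2]:
    dp, dm = d_coeffs(t)
    print(f"t = {t:.3f}: |d_+|^2 = {abs(dp)**2:.3f}, |d_-|^2 = {abs(dm)**2:.3f}")
```

So even though nothing about the physical setup is time-dependent, the probabilities of the two ##\sigma_x## outcomes slosh back and forth, exactly because the initial state is not an energy eigenstate.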

MatthijsRog said:
Does that mean that the "plan of attack" for quantum mechanical problems becomes the following? Instead of integrating the awful [itex]\Psi^* \hat{Q} \Psi[/itex] we find the eigenstates of [itex]\hat{Q}[/itex], express our wavefunction in this basis and use the coefficients of this projection to determine the expectation value? Does that mean we still must find [itex]\Psi[/itex] by solving Schrödinger's equation?

You can do either. If you can express the wavefunction in eigenstates of your observable, then all well and good. But, as above, the coefficients of that expansion will change over time.

The advantage of using the Hamiltonian basis is that the form of the solution does not change over time: each coefficient just picks up a phase factor. That's the big advantage of the solution in terms of energy eigenstates.
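
As a sanity check on the two routes to an expectation value, here's a small finite-dimensional sketch (my own toy example, using a random Hermitian matrix as the "observable"): the sandwich ##\langle\Psi|\hat{Q}|\Psi\rangle## and the spectral sum ##\sum_n |c_n|^2 q_n## agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4x4 Hermitian matrix standing in for an observable Q.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q = (A + A.conj().T) / 2

# A normalized random state.
Psi = rng.normal(size=4) + 1j * rng.normal(size=4)
Psi = Psi / np.linalg.norm(Psi)

# Route 1: the direct sandwich <Psi| Q |Psi>.
exp_direct = np.vdot(Psi, Q @ Psi).real

# Route 2: expand Psi in the eigenbasis of Q and weight each eigenvalue q_n
# by the probability |c_n|^2.
q, V = np.linalg.eigh(Q)          # columns of V are the eigenvectors
c = V.conj().T @ Psi              # c_n = <v_n | Psi>
exp_spectral = np.sum(np.abs(c) ** 2 * q)

print(np.isclose(exp_direct, exp_spectral))  # True
```

This is just the spectral theorem at work; in the infinite-dimensional case the sum over ##n## plays the role of the integral ##\int \Psi^* \hat{Q} \Psi\, dx##.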
 
  • #3
Very clear answer, thank you very much for clarifying all those things! I had not taken the difference between the S.E. and the TISE into account.

I believe I understand most of it, but I will read your reply again in one or two weeks when I've progressed a little further to make sure it sticks.
 
