Role of determinate states in quantum mechanics

SUMMARY

The discussion centers on the role of determinate states in quantum mechanics, particularly the relationship between the full Schrödinger equation and the time-independent Schrödinger equation (TISE). The key clarification is that essentially any well-behaved wavefunction can serve as an initial state; it is the time evolution of that state that must obey the full Schrödinger equation. The Hamiltonian operator is emphasized as a key component: it represents the total energy of the system, its eigenstates (found by solving the TISE) form a basis for the Hilbert space, and expanding a wavefunction in this energy eigenbasis makes the time evolution particularly simple.

PREREQUISITES
  • Understanding of the Schrödinger equation and its solutions
  • Familiarity with the time-independent Schrödinger equation (TISE)
  • Knowledge of linear algebra, specifically eigenvalues and eigenfunctions
  • Basic concepts of quantum mechanics, including observables and operators
NEXT STEPS
  • Study the mathematical techniques for solving the Schrödinger equation, including separation of variables
  • Explore the properties and applications of Hermitian operators in quantum mechanics
  • Learn about the physical significance of the Hamiltonian and its role in quantum systems
  • Investigate the implications of expressing wavefunctions in different bases beyond the Hamiltonian basis
USEFUL FOR

Undergraduate students in quantum mechanics, physics educators, and anyone seeking to deepen their understanding of quantum states and the mathematical framework of quantum mechanics.

MatthijsRog
Hi,

I'm an undergrad taking my very first serious course in QM. We're following Griffiths' book, and so far we're staying close to the text in terms of course structure.

Griffiths starts out his book by postulating that each and every state ##\Psi## for any system must be a solution to the Schrödinger equation. He then introduces two things: first, the statistical interpretation of this function; second, the notion of operators ##\hat{Q}## corresponding to observables ##Q##, such that the integral ##\int \Psi^* \hat{Q} \Psi \, dx## gives the expectation value of the observable ##Q##. So far I understand everything.

After working through some example solutions of the Schrödinger equation, Griffiths moves into the formalism of QM: he recasts it into linear algebra. He shows us that the operators ##\hat{Q}## can be thought of as linear, Hermitian operators that act on functions in Hilbert space (and thus on the wave/state functions). He then introduces the notion of a determinate state: a state that is an eigenfunction of an operator. He proves that if a system exists in such a state, then the spread in the measured value is zero: any measurement will result in the same outcome. Additionally, since the operator is Hermitian, its eigenfunctions form a basis for the Hilbert space.
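(To spell out that zero-spread argument in one line: if ##\hat{Q}\Psi = q\Psi## with ##\Psi## normalized, then ##\langle Q \rangle = \int \Psi^* \hat{Q} \Psi \, dx = q## and ##\langle Q^2 \rangle = q^2##, so ##\sigma_Q^2 = \langle Q^2 \rangle - \langle Q \rangle^2 = 0##.)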

Now my question: is it true that not all of these eigenstates are admissible states to the problem at hand? All states must solve the Schrödinger equation. So while my wave function must be a linear combination of eigenstates, is it true that it is only a linear combination of eigenstates that solve the Schrödinger equation?

Does that mean that the "plan of attack" for quantum mechanical problems becomes the following? Instead of integrating the awful ##\Psi^* \hat{Q} \Psi##, we find the eigenstates of ##\hat{Q}##, express our wavefunction in this basis, and use the coefficients of this projection to determine the expectation value? Does that mean we still must find ##\Psi## by solving Schrödinger's equation?

I know it's a lot of question marks up there, but students are known to get lost at this junction and I want to make sure I really understand what I'm doing. Thanks in advance for helping out!
 
MatthijsRog said:
Now my question: is it true that not all of these eigenstates are admissible states to the problem at hand? All states must solve the Schrödinger equation. So while my wave function must be a linear combination of eigenstates, is it true that it is only a linear combination of eigenstates that solve the Schrödinger equation?

There are a couple of things. First, you have to distinguish between a solution to the full Schrödinger equation for a wavefunction ##\Psi(x, t)## and a solution to the time-independent Schrödinger equation (TISE) ##\psi(x)##.

At time ##t=0## any well-behaved function, within reason, can be an initial wavefunction for any potential. It's how that wavefunction evolves over time that must be a solution to the (full) Schrödinger equation.

The most important operator is the Hamiltonian, which represents the total energy of the system. In the first examples you encounter, the potential is time-independent: ##V## is a function of position only and does not change over time. The Hamiltonian is important both mathematically and physically.

Mathematically, you can solve the Schrödinger equation by separation of variables, which leads to the TISE, an eigenvalue equation for the Hamiltonian. The resulting eigenstates form a basis for the Hilbert space of your wavefunctions, so you can express the initial state as a linear combination of them:

##\Psi(x, 0) = \sum c_n \psi_n(x)##

And, from the separation of variables, the full solution is:

##\Psi(x, t) = \sum c_n \psi_n(x) e^{-iE_n t/\hbar}##

The mathematics of separation of variables aligns with the physical property that energy eigenstates evolve "independently" over time. Note that the form of the (separated) time component implies that the probability of each energy measurement outcome is constant over time.
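To make this concrete, here is a minimal numerical sketch (my own illustration, not something from Griffiths) for the infinite square well, with units ##\hbar = m = L = 1## and an arbitrarily chosen triangular initial state: the coefficients ##c_n## come from overlap integrals with the TISE solutions, and the full solution just attaches a phase ##e^{-iE_n t/\hbar}## to each term.

```python
# Sketch of Psi(x,0) = sum_n c_n psi_n(x) and
# Psi(x,t) = sum_n c_n psi_n(x) exp(-i E_n t / hbar)
# for the infinite square well of width L, with hbar = m = L = 1 assumed.

import numpy as np

hbar = m = L = 1.0
x = np.linspace(0.0, L, 2000)

def psi_n(n, x):
    """Energy eigenfunction of the infinite square well (TISE solution)."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E_n(n):
    """Corresponding energy eigenvalue."""
    return (n * np.pi * hbar) ** 2 / (2.0 * m * L ** 2)

# Some initial state Psi(x, 0): a normalized triangular bump, chosen arbitrarily.
Psi0 = np.minimum(x, L - x)
Psi0 = Psi0 / np.sqrt(np.trapz(np.abs(Psi0) ** 2, x))

# Expansion coefficients c_n = <psi_n | Psi(0)> (real here, since everything is real).
N = 50
c = np.array([np.trapz(psi_n(n, x) * Psi0, x) for n in range(1, N + 1)])

def Psi(t):
    """Full solution: each eigenstate just picks up its own phase factor."""
    return sum(c[n - 1] * psi_n(n, x) * np.exp(-1j * E_n(n) * t / hbar)
               for n in range(1, N + 1))

# The energy probabilities |c_n|^2 are independent of t, and the norm stays 1.
print("sum of |c_n|^2:", np.sum(c ** 2))                      # ~1 (truncated at N terms)
print("norm at t = 3:", np.trapz(np.abs(Psi(3.0)) ** 2, x))   # ~1
```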

Now, it's entirely possible to express the initial wavefunction as a linear combination in any other basis, say a basis ##\{u_n(x)\}## of eigenstates of some other observable, so that we have:

##\Psi(x, 0) = \sum d_n u_n(x)##

But, in this basis, you don't have a nice formula for how the coefficients evolve over time; in general, the probabilities of the possible measurement outcomes for this observable change over time. Still, buried in there is a solution of the form:

##\Psi(x, t) = \sum d_n(t) u_n(x)##

It's just not so easy to work out what the ##d_n(t)## are, as they change over time.
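To see where that time dependence comes from (assuming the ##u_n## are orthonormal), project the full solution onto ##u_n##:

##d_n(t) = \int u_n^*(x)\, \Psi(x,t)\, dx = \sum_m c_m\, e^{-iE_m t/\hbar} \int u_n^*(x)\, \psi_m(x)\, dx##

so each ##d_n(t)## mixes all the energy components, and it only stays constant (up to a phase) if ##u_n## happens to be an energy eigenstate itself.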

MatthijsRog said:
Does that mean that the "plan of attack" for quantum mechanical problems becomes the following? Instead of integrating the awful ##\Psi^* \hat{Q} \Psi##, we find the eigenstates of ##\hat{Q}##, express our wavefunction in this basis, and use the coefficients of this projection to determine the expectation value? Does that mean we still must find ##\Psi## by solving Schrödinger's equation?

You can do either. If you can express the wavefunction in eigenstates of your observable, then all well and good. But, as above, the coefficients of that expansion will change over time.

The advantage of using the Hamiltonian basis is that the form of the solution does not change over time: each coefficient only picks up a known phase factor, so the energy probabilities stay fixed. That's the big advantage of expanding in energy eigenstates.
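Concretely (a standard result, assuming ##\Psi## is normalized and the energies are non-degenerate): the probability of measuring the energy ##E_n## and the expectation value of the energy follow directly from the coefficients,

##P(E_n) = |c_n|^2, \qquad \langle H \rangle = \sum_n |c_n|^2 E_n,##

and neither depends on ##t##, because the phase factors ##e^{-iE_n t/\hbar}## drop out of ##|c_n e^{-iE_n t/\hbar}|^2##.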
 
Very clear answer, thank you very much for clarifying all those things! I had not taken the difference between the S.E. and the TISE into account.

I believe I understand most of it, but I will read your reply again in one or two weeks when I've progressed a little further to make sure it sticks.
 
